00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2037 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3297 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.006 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu24-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.007 The recommended git tool is: git 00:00:00.007 using credential 00000000-0000-0000-0000-000000000002 00:00:00.008 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu24-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.022 Fetching changes from the remote Git repository 00:00:00.024 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.039 Using shallow fetch with depth 1 00:00:00.039 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.039 > git --version # timeout=10 00:00:00.057 > git --version # 'git version 2.39.2' 00:00:00.057 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.077 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.077 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.767 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.779 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.790 Checking out Revision 4313f32deecbb7108199ebd1913b403a3005dece (FETCH_HEAD) 00:00:02.790 > git config core.sparsecheckout # timeout=10 00:00:02.803 > git read-tree -mu HEAD # timeout=10 00:00:02.820 > git checkout -f 4313f32deecbb7108199ebd1913b403a3005dece # timeout=5 00:00:02.841 Commit message: "packer: Add bios builder" 00:00:02.841 > git rev-list --no-walk 4313f32deecbb7108199ebd1913b403a3005dece # timeout=10 00:00:03.062 [Pipeline] Start of Pipeline 00:00:03.079 [Pipeline] library 00:00:03.081 Loading library shm_lib@master 00:00:07.184 Library shm_lib@master is cached. Copying from home. 00:00:07.212 [Pipeline] node 00:00:07.329 Running on VM-host-SM9 in /var/jenkins/workspace/ubuntu24-vg-autotest 00:00:07.332 [Pipeline] { 00:00:07.352 [Pipeline] catchError 00:00:07.355 [Pipeline] { 00:00:07.372 [Pipeline] wrap 00:00:07.383 [Pipeline] { 00:00:07.392 [Pipeline] stage 00:00:07.394 [Pipeline] { (Prologue) 00:00:07.413 [Pipeline] echo 00:00:07.415 Node: VM-host-SM9 00:00:07.419 [Pipeline] cleanWs 00:00:07.428 [WS-CLEANUP] Deleting project workspace... 00:00:07.428 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.433 [WS-CLEANUP] done 00:00:07.656 [Pipeline] setCustomBuildProperty 00:00:07.715 [Pipeline] httpRequest 00:00:07.726 [Pipeline] echo 00:00:07.727 Sorcerer 10.211.164.101 is alive 00:00:07.732 [Pipeline] httpRequest 00:00:07.735 HttpMethod: GET 00:00:07.735 URL: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:07.736 Sending request to url: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:07.743 Response Code: HTTP/1.1 200 OK 00:00:07.744 Success: Status code 200 is in the accepted range: 200,404 00:00:07.744 Saving response body to /var/jenkins/workspace/ubuntu24-vg-autotest/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:09.644 [Pipeline] sh 00:00:09.926 + tar --no-same-owner -xf jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:09.947 [Pipeline] httpRequest 00:00:09.975 [Pipeline] echo 00:00:09.977 Sorcerer 10.211.164.101 is alive 00:00:09.987 [Pipeline] httpRequest 00:00:09.992 HttpMethod: GET 00:00:09.993 URL: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:00:09.993 Sending request to url: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:00:10.010 Response Code: HTTP/1.1 200 OK 00:00:10.010 Success: Status code 200 is in the accepted range: 200,404 00:00:10.011 Saving response body to /var/jenkins/workspace/ubuntu24-vg-autotest/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:01:09.336 [Pipeline] sh 00:01:09.641 + tar --no-same-owner -xf spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:01:12.199 [Pipeline] sh 00:01:12.478 + git -C spdk log --oneline -n5 00:01:12.478 dbef7efac test: fix dpdk builds on ubuntu24 00:01:12.478 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:01:12.478 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:01:12.478 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:01:12.478 e03c164a1 nvme: add nvme_ctrlr_lock 00:01:12.497 [Pipeline] writeFile 00:01:12.514 [Pipeline] sh 00:01:12.795 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:12.807 [Pipeline] sh 00:01:13.087 + cat autorun-spdk.conf 00:01:13.087 SPDK_TEST_UNITTEST=1 00:01:13.087 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:13.087 SPDK_TEST_NVME=1 00:01:13.087 SPDK_TEST_BLOCKDEV=1 00:01:13.087 SPDK_RUN_ASAN=1 00:01:13.087 SPDK_RUN_UBSAN=1 00:01:13.087 SPDK_TEST_RAID5=1 00:01:13.087 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:13.094 RUN_NIGHTLY=1 00:01:13.095 [Pipeline] } 00:01:13.107 [Pipeline] // stage 00:01:13.121 [Pipeline] stage 00:01:13.123 [Pipeline] { (Run VM) 00:01:13.136 [Pipeline] sh 00:01:13.415 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:13.415 + echo 'Start stage prepare_nvme.sh' 00:01:13.415 Start stage prepare_nvme.sh 00:01:13.415 + [[ -n 5 ]] 00:01:13.415 + disk_prefix=ex5 00:01:13.415 + [[ -n /var/jenkins/workspace/ubuntu24-vg-autotest ]] 00:01:13.415 + [[ -e /var/jenkins/workspace/ubuntu24-vg-autotest/autorun-spdk.conf ]] 00:01:13.415 + source /var/jenkins/workspace/ubuntu24-vg-autotest/autorun-spdk.conf 00:01:13.415 ++ SPDK_TEST_UNITTEST=1 00:01:13.415 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:13.415 ++ SPDK_TEST_NVME=1 00:01:13.415 ++ SPDK_TEST_BLOCKDEV=1 00:01:13.415 ++ SPDK_RUN_ASAN=1 00:01:13.415 ++ SPDK_RUN_UBSAN=1 00:01:13.415 ++ SPDK_TEST_RAID5=1 00:01:13.415 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:13.415 ++ RUN_NIGHTLY=1 00:01:13.415 + cd /var/jenkins/workspace/ubuntu24-vg-autotest 00:01:13.415 + nvme_files=() 00:01:13.415 + 
declare -A nvme_files 00:01:13.415 + backend_dir=/var/lib/libvirt/images/backends 00:01:13.415 + nvme_files['nvme.img']=5G 00:01:13.415 + nvme_files['nvme-cmb.img']=5G 00:01:13.415 + nvme_files['nvme-multi0.img']=4G 00:01:13.415 + nvme_files['nvme-multi1.img']=4G 00:01:13.415 + nvme_files['nvme-multi2.img']=4G 00:01:13.415 + nvme_files['nvme-openstack.img']=8G 00:01:13.415 + nvme_files['nvme-zns.img']=5G 00:01:13.415 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:13.415 + (( SPDK_TEST_FTL == 1 )) 00:01:13.415 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:13.415 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:13.415 + for nvme in "${!nvme_files[@]}" 00:01:13.415 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:01:13.415 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:13.415 + for nvme in "${!nvme_files[@]}" 00:01:13.415 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:01:13.415 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:13.415 + for nvme in "${!nvme_files[@]}" 00:01:13.415 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:01:13.674 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:13.674 + for nvme in "${!nvme_files[@]}" 00:01:13.674 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:01:13.674 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:13.674 + for nvme in "${!nvme_files[@]}" 00:01:13.674 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:01:13.674 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:13.674 + for nvme in "${!nvme_files[@]}" 00:01:13.674 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:01:13.932 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:13.932 + for nvme in "${!nvme_files[@]}" 00:01:13.932 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:01:14.191 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:14.191 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:01:14.191 + echo 'End stage prepare_nvme.sh' 00:01:14.191 End stage prepare_nvme.sh 00:01:14.202 [Pipeline] sh 00:01:14.482 + DISTRO=ubuntu2404 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:14.482 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -H -a -v -f ubuntu2404 00:01:14.482 00:01:14.482 DIR=/var/jenkins/workspace/ubuntu24-vg-autotest/spdk/scripts/vagrant 00:01:14.482 SPDK_DIR=/var/jenkins/workspace/ubuntu24-vg-autotest/spdk 00:01:14.482 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu24-vg-autotest 00:01:14.482 HELP=0 00:01:14.482 DRY_RUN=0 00:01:14.482 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img, 00:01:14.482 
NVME_DISKS_TYPE=nvme, 00:01:14.482 NVME_AUTO_CREATE=0 00:01:14.482 NVME_DISKS_NAMESPACES=, 00:01:14.482 NVME_CMB=, 00:01:14.482 NVME_PMR=, 00:01:14.482 NVME_ZNS=, 00:01:14.482 NVME_MS=, 00:01:14.482 NVME_FDP=, 00:01:14.482 SPDK_VAGRANT_DISTRO=ubuntu2404 00:01:14.482 SPDK_VAGRANT_VMCPU=10 00:01:14.482 SPDK_VAGRANT_VMRAM=12288 00:01:14.482 SPDK_VAGRANT_PROVIDER=libvirt 00:01:14.482 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:14.482 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:14.482 SPDK_OPENSTACK_NETWORK=0 00:01:14.482 VAGRANT_PACKAGE_BOX=0 00:01:14.482 VAGRANTFILE=/var/jenkins/workspace/ubuntu24-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:14.482 FORCE_DISTRO=true 00:01:14.482 VAGRANT_BOX_VERSION= 00:01:14.482 EXTRA_VAGRANTFILES= 00:01:14.482 NIC_MODEL=e1000 00:01:14.482 00:01:14.482 mkdir: created directory '/var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt' 00:01:14.482 /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt /var/jenkins/workspace/ubuntu24-vg-autotest 00:01:17.771 Bringing machine 'default' up with 'libvirt' provider... 00:01:17.771 ==> default: Creating image (snapshot of base box volume). 00:01:18.030 ==> default: Creating domain with the following settings... 00:01:18.030 ==> default: -- Name: ubuntu2404-24.04-1720510786-2314_default_1721969916_d7f38d6967f6c1a0d8a1 00:01:18.030 ==> default: -- Domain type: kvm 00:01:18.030 ==> default: -- Cpus: 10 00:01:18.030 ==> default: -- Feature: acpi 00:01:18.030 ==> default: -- Feature: apic 00:01:18.030 ==> default: -- Feature: pae 00:01:18.030 ==> default: -- Memory: 12288M 00:01:18.030 ==> default: -- Memory Backing: hugepages: 00:01:18.030 ==> default: -- Management MAC: 00:01:18.030 ==> default: -- Loader: 00:01:18.030 ==> default: -- Nvram: 00:01:18.030 ==> default: -- Base box: spdk/ubuntu2404 00:01:18.030 ==> default: -- Storage pool: default 00:01:18.030 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2404-24.04-1720510786-2314_default_1721969916_d7f38d6967f6c1a0d8a1.img (20G) 00:01:18.030 ==> default: -- Volume Cache: default 00:01:18.030 ==> default: -- Kernel: 00:01:18.030 ==> default: -- Initrd: 00:01:18.030 ==> default: -- Graphics Type: vnc 00:01:18.030 ==> default: -- Graphics Port: -1 00:01:18.030 ==> default: -- Graphics IP: 127.0.0.1 00:01:18.030 ==> default: -- Graphics Password: Not defined 00:01:18.030 ==> default: -- Video Type: cirrus 00:01:18.030 ==> default: -- Video VRAM: 9216 00:01:18.030 ==> default: -- Sound Type: 00:01:18.030 ==> default: -- Keymap: en-us 00:01:18.030 ==> default: -- TPM Path: 00:01:18.030 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:18.030 ==> default: -- Command line args: 00:01:18.030 ==> default: -> value=-device, 00:01:18.030 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:18.030 ==> default: -> value=-drive, 00:01:18.030 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:01:18.030 ==> default: -> value=-device, 00:01:18.030 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:18.030 ==> default: Creating shared folders metadata... 00:01:18.030 ==> default: Starting domain. 00:01:19.408 ==> default: Waiting for domain to get an IP address... 00:01:29.380 ==> default: Waiting for SSH to become available... 00:01:30.758 ==> default: Configuring and enabling network interfaces... 
00:01:36.029 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:41.300 ==> default: Mounting SSHFS shared folder... 00:01:41.558 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output => /home/vagrant/spdk_repo/output 00:01:41.558 ==> default: Checking Mount.. 00:01:42.495 ==> default: Folder Successfully Mounted! 00:01:42.495 ==> default: Running provisioner: file... 00:01:42.754 default: ~/.gitconfig => .gitconfig 00:01:43.014 00:01:43.014 SUCCESS! 00:01:43.014 00:01:43.014 cd to /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt and type "vagrant ssh" to use. 00:01:43.014 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:43.014 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt" to destroy all trace of vm. 00:01:43.014 00:01:43.023 [Pipeline] } 00:01:43.041 [Pipeline] // stage 00:01:43.050 [Pipeline] dir 00:01:43.051 Running in /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt 00:01:43.053 [Pipeline] { 00:01:43.067 [Pipeline] catchError 00:01:43.068 [Pipeline] { 00:01:43.083 [Pipeline] sh 00:01:43.365 + vagrant ssh-config --host vagrant 00:01:43.365 + sed -ne /^Host/,$p 00:01:43.365 + tee ssh_conf 00:01:46.678 Host vagrant 00:01:46.678 HostName 192.168.121.233 00:01:46.678 User vagrant 00:01:46.678 Port 22 00:01:46.678 UserKnownHostsFile /dev/null 00:01:46.678 StrictHostKeyChecking no 00:01:46.678 PasswordAuthentication no 00:01:46.678 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2404/24.04-1720510786-2314/libvirt/ubuntu2404 00:01:46.678 IdentitiesOnly yes 00:01:46.678 LogLevel FATAL 00:01:46.678 ForwardAgent yes 00:01:46.678 ForwardX11 yes 00:01:46.678 00:01:46.692 [Pipeline] withEnv 00:01:46.694 [Pipeline] { 00:01:46.709 [Pipeline] sh 00:01:46.990 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:46.990 source /etc/os-release 00:01:46.990 [[ -e /image.version ]] && img=$(< /image.version) 00:01:46.990 # Minimal, systemd-like check. 00:01:46.990 if [[ -e /.dockerenv ]]; then 00:01:46.990 # Clear garbage from the node's name: 00:01:46.990 # agt-er_autotest_547-896 -> autotest_547-896 00:01:46.990 # $HOSTNAME is the actual container id 00:01:46.990 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:46.990 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:46.990 # We can assume this is a mount from a host where container is running, 00:01:46.990 # so fetch its hostname to easily identify the target swarm worker. 
00:01:46.990 container="$(< /etc/hostname) ($agent)" 00:01:46.990 else 00:01:46.990 # Fallback 00:01:46.990 container=$agent 00:01:46.990 fi 00:01:46.990 fi 00:01:46.990 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:46.990 00:01:47.261 [Pipeline] } 00:01:47.280 [Pipeline] // withEnv 00:01:47.289 [Pipeline] setCustomBuildProperty 00:01:47.303 [Pipeline] stage 00:01:47.305 [Pipeline] { (Tests) 00:01:47.320 [Pipeline] sh 00:01:47.600 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu24-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:47.917 [Pipeline] sh 00:01:48.212 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu24-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:48.488 [Pipeline] timeout 00:01:48.489 Timeout set to expire in 1 hr 30 min 00:01:48.491 [Pipeline] { 00:01:48.507 [Pipeline] sh 00:01:48.790 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:49.358 HEAD is now at dbef7efac test: fix dpdk builds on ubuntu24 00:01:49.370 [Pipeline] sh 00:01:49.651 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:49.925 [Pipeline] sh 00:01:50.205 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu24-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:50.484 [Pipeline] sh 00:01:50.765 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu24-vg-autotest ./autoruner.sh spdk_repo 00:01:51.025 ++ readlink -f spdk_repo 00:01:51.025 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:51.025 + [[ -n /home/vagrant/spdk_repo ]] 00:01:51.025 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:51.025 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:51.025 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:51.025 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:51.025 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:51.025 + [[ ubuntu24-vg-autotest == pkgdep-* ]] 00:01:51.025 + cd /home/vagrant/spdk_repo 00:01:51.025 + source /etc/os-release 00:01:51.025 ++ PRETTY_NAME='Ubuntu 24.04 LTS' 00:01:51.025 ++ NAME=Ubuntu 00:01:51.025 ++ VERSION_ID=24.04 00:01:51.025 ++ VERSION='24.04 LTS (Noble Numbat)' 00:01:51.025 ++ VERSION_CODENAME=noble 00:01:51.025 ++ ID=ubuntu 00:01:51.025 ++ ID_LIKE=debian 00:01:51.025 ++ HOME_URL=https://www.ubuntu.com/ 00:01:51.025 ++ SUPPORT_URL=https://help.ubuntu.com/ 00:01:51.025 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 00:01:51.025 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 00:01:51.025 ++ UBUNTU_CODENAME=noble 00:01:51.025 ++ LOGO=ubuntu-logo 00:01:51.025 + uname -a 00:01:51.025 Linux ubuntu2404-cloud-1720510786-2314 6.8.0-36-generic #36-Ubuntu SMP PREEMPT_DYNAMIC Mon Jun 10 10:49:14 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:01:51.025 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:51.025 Hugepages 00:01:51.025 node hugesize free / total 00:01:51.025 node0 1048576kB 0 / 0 00:01:51.025 node0 2048kB 0 / 0 00:01:51.025 00:01:51.025 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:51.285 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:51.285 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:51.285 + rm -f /tmp/spdk-ld-path 00:01:51.285 + source autorun-spdk.conf 00:01:51.285 ++ SPDK_TEST_UNITTEST=1 00:01:51.285 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:51.285 ++ SPDK_TEST_NVME=1 00:01:51.285 ++ SPDK_TEST_BLOCKDEV=1 00:01:51.285 ++ SPDK_RUN_ASAN=1 00:01:51.285 ++ SPDK_RUN_UBSAN=1 00:01:51.285 ++ SPDK_TEST_RAID5=1 00:01:51.285 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:51.285 ++ RUN_NIGHTLY=1 00:01:51.285 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:51.285 + [[ -n '' ]] 00:01:51.285 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:51.285 + for M in /var/spdk/build-*-manifest.txt 00:01:51.285 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:51.285 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:51.285 + for M in /var/spdk/build-*-manifest.txt 00:01:51.285 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:51.285 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:51.285 ++ uname 00:01:51.285 + [[ Linux == \L\i\n\u\x ]] 00:01:51.285 + sudo dmesg -T 00:01:51.285 + sudo dmesg --clear 00:01:51.285 + dmesg_pid=2359 00:01:51.285 + [[ Ubuntu == FreeBSD ]] 00:01:51.285 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:51.285 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:51.285 + sudo dmesg -Tw 00:01:51.285 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:51.285 + [[ -x /usr/src/fio-static/fio ]] 00:01:51.285 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:51.285 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:51.285 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:51.285 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:01:51.285 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:51.285 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:51.285 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:51.285 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:51.285 Test configuration: 00:01:51.285 SPDK_TEST_UNITTEST=1 00:01:51.285 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:51.285 SPDK_TEST_NVME=1 00:01:51.285 SPDK_TEST_BLOCKDEV=1 00:01:51.285 SPDK_RUN_ASAN=1 00:01:51.285 SPDK_RUN_UBSAN=1 00:01:51.285 SPDK_TEST_RAID5=1 00:01:51.285 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:51.285 RUN_NIGHTLY=1 04:59:09 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:51.285 04:59:09 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:51.285 04:59:09 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:51.285 04:59:09 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:51.285 04:59:09 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:51.285 04:59:09 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:51.285 04:59:09 -- paths/export.sh@4 -- $ PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:51.285 04:59:09 -- paths/export.sh@5 -- $ PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:51.285 04:59:09 -- paths/export.sh@6 -- $ export PATH 00:01:51.285 04:59:09 -- paths/export.sh@7 -- $ echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:51.285 04:59:09 -- common/autobuild_common.sh@437 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:51.285 04:59:09 -- common/autobuild_common.sh@438 -- $ date +%s 00:01:51.285 04:59:09 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1721969949.XXXXXX 
00:01:51.285 04:59:09 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1721969949.8z6AZJ 00:01:51.285 04:59:09 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]] 00:01:51.285 04:59:09 -- common/autobuild_common.sh@444 -- $ '[' -n '' ']' 00:01:51.285 04:59:09 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:51.285 04:59:09 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:51.285 04:59:09 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:51.285 04:59:09 -- common/autobuild_common.sh@454 -- $ get_config_params 00:01:51.285 04:59:09 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:51.285 04:59:09 -- common/autotest_common.sh@10 -- $ set +x 00:01:51.545 04:59:09 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:01:51.545 04:59:09 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:51.545 04:59:09 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:51.545 04:59:09 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:51.545 04:59:09 -- spdk/autobuild.sh@16 -- $ date -u 00:01:51.545 Fri Jul 26 04:59:09 UTC 2024 00:01:51.545 04:59:09 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:51.545 LTS-60-gdbef7efac 00:01:51.545 04:59:09 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:51.545 04:59:09 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:51.545 04:59:09 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:51.545 04:59:09 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:51.545 04:59:09 -- common/autotest_common.sh@10 -- $ set +x 00:01:51.545 ************************************ 00:01:51.545 START TEST asan 00:01:51.545 ************************************ 00:01:51.545 using asan 00:01:51.545 04:59:09 -- common/autotest_common.sh@1104 -- $ echo 'using asan' 00:01:51.545 00:01:51.545 real 0m0.000s 00:01:51.545 user 0m0.000s 00:01:51.545 sys 0m0.000s 00:01:51.545 04:59:09 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:51.545 04:59:09 -- common/autotest_common.sh@10 -- $ set +x 00:01:51.545 ************************************ 00:01:51.545 END TEST asan 00:01:51.545 ************************************ 00:01:51.545 04:59:09 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:51.545 04:59:09 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:51.545 04:59:09 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:51.545 04:59:09 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:51.545 04:59:09 -- common/autotest_common.sh@10 -- $ set +x 00:01:51.545 ************************************ 00:01:51.545 START TEST ubsan 00:01:51.545 ************************************ 00:01:51.545 using ubsan 00:01:51.545 04:59:09 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:51.545 00:01:51.545 real 0m0.000s 00:01:51.545 user 0m0.000s 00:01:51.545 sys 0m0.000s 00:01:51.545 04:59:09 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:51.545 ************************************ 00:01:51.545 END TEST ubsan 00:01:51.545 ************************************ 00:01:51.545 
04:59:09 -- common/autotest_common.sh@10 -- $ set +x 00:01:51.545 04:59:10 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:51.545 04:59:10 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:51.545 04:59:10 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:51.545 04:59:10 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:51.545 04:59:10 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:51.545 04:59:10 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:01:51.545 04:59:10 -- spdk/autobuild.sh@58 -- $ unittest_build 00:01:51.545 04:59:10 -- common/autobuild_common.sh@414 -- $ run_test unittest_build _unittest_build 00:01:51.545 04:59:10 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:01:51.545 04:59:10 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:51.545 04:59:10 -- common/autotest_common.sh@10 -- $ set +x 00:01:51.545 ************************************ 00:01:51.545 START TEST unittest_build 00:01:51.545 ************************************ 00:01:51.545 04:59:10 -- common/autotest_common.sh@1104 -- $ _unittest_build 00:01:51.545 04:59:10 -- common/autobuild_common.sh@405 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --without-shared 00:01:51.545 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:51.545 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:52.114 Using 'verbs' RDMA provider 00:02:07.950 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:02:20.156 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:20.156 Creating mk/config.mk...done. 00:02:20.156 Creating mk/cc.flags.mk...done. 00:02:20.156 Type 'make' to build. 00:02:20.156 04:59:38 -- common/autobuild_common.sh@406 -- $ make -j10 00:02:20.156 make[1]: Nothing to be done for 'all'. 
00:02:38.232 The Meson build system 00:02:38.232 Version: 1.4.1 00:02:38.232 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:38.232 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:38.232 Build type: native build 00:02:38.232 Program cat found: YES (/usr/bin/cat) 00:02:38.232 Project name: DPDK 00:02:38.232 Project version: 23.11.0 00:02:38.232 C compiler for the host machine: cc (gcc 13.2.0 "cc (Ubuntu 13.2.0-23ubuntu4) 13.2.0") 00:02:38.232 C linker for the host machine: cc ld.bfd 2.42 00:02:38.232 Host machine cpu family: x86_64 00:02:38.232 Host machine cpu: x86_64 00:02:38.232 Message: ## Building in Developer Mode ## 00:02:38.232 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:38.232 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:38.232 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:38.232 Program python3 found: YES (/var/spdk/dependencies/pip/bin/python3) 00:02:38.232 Program cat found: YES (/usr/bin/cat) 00:02:38.232 Compiler for C supports arguments -march=native: YES 00:02:38.232 Checking for size of "void *" : 8 00:02:38.232 Checking for size of "void *" : 8 (cached) 00:02:38.232 Library m found: YES 00:02:38.232 Library numa found: YES 00:02:38.232 Has header "numaif.h" : YES 00:02:38.232 Library fdt found: NO 00:02:38.232 Library execinfo found: NO 00:02:38.233 Has header "execinfo.h" : YES 00:02:38.233 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.1 00:02:38.233 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:38.233 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:38.233 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:38.233 Run-time dependency openssl found: YES 3.0.13 00:02:38.233 Run-time dependency libpcap found: NO (tried pkgconfig) 00:02:38.233 Library pcap found: NO 00:02:38.233 Compiler for C supports arguments -Wcast-qual: YES 00:02:38.233 Compiler for C supports arguments -Wdeprecated: YES 00:02:38.233 Compiler for C supports arguments -Wformat: YES 00:02:38.233 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:38.233 Compiler for C supports arguments -Wformat-security: YES 00:02:38.233 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:38.233 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:38.233 Compiler for C supports arguments -Wnested-externs: YES 00:02:38.233 Compiler for C supports arguments -Wold-style-definition: YES 00:02:38.233 Compiler for C supports arguments -Wpointer-arith: YES 00:02:38.233 Compiler for C supports arguments -Wsign-compare: YES 00:02:38.233 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:38.233 Compiler for C supports arguments -Wundef: YES 00:02:38.233 Compiler for C supports arguments -Wwrite-strings: YES 00:02:38.233 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:38.233 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:38.233 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:38.233 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:38.233 Program objdump found: YES (/usr/bin/objdump) 00:02:38.233 Compiler for C supports arguments -mavx512f: YES 00:02:38.233 Checking if "AVX512 checking" compiles: YES 00:02:38.233 Fetching value of define "__SSE4_2__" : 1 00:02:38.233 Fetching value of define "__AES__" : 1 00:02:38.233 Fetching value of define "__AVX__" : 1 00:02:38.233 
Fetching value of define "__AVX2__" : 1 00:02:38.233 Fetching value of define "__AVX512BW__" : (undefined) 00:02:38.233 Fetching value of define "__AVX512CD__" : (undefined) 00:02:38.233 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:38.233 Fetching value of define "__AVX512F__" : (undefined) 00:02:38.233 Fetching value of define "__AVX512VL__" : (undefined) 00:02:38.233 Fetching value of define "__PCLMUL__" : 1 00:02:38.233 Fetching value of define "__RDRND__" : 1 00:02:38.233 Fetching value of define "__RDSEED__" : 1 00:02:38.233 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:38.233 Fetching value of define "__znver1__" : (undefined) 00:02:38.233 Fetching value of define "__znver2__" : (undefined) 00:02:38.233 Fetching value of define "__znver3__" : (undefined) 00:02:38.233 Fetching value of define "__znver4__" : (undefined) 00:02:38.233 Library asan found: YES 00:02:38.233 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:38.233 Message: lib/log: Defining dependency "log" 00:02:38.233 Message: lib/kvargs: Defining dependency "kvargs" 00:02:38.233 Message: lib/telemetry: Defining dependency "telemetry" 00:02:38.233 Library rt found: YES 00:02:38.233 Checking for function "getentropy" : NO 00:02:38.233 Message: lib/eal: Defining dependency "eal" 00:02:38.233 Message: lib/ring: Defining dependency "ring" 00:02:38.233 Message: lib/rcu: Defining dependency "rcu" 00:02:38.233 Message: lib/mempool: Defining dependency "mempool" 00:02:38.233 Message: lib/mbuf: Defining dependency "mbuf" 00:02:38.233 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:38.233 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:38.233 Compiler for C supports arguments -mpclmul: YES 00:02:38.233 Compiler for C supports arguments -maes: YES 00:02:38.233 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:38.233 Compiler for C supports arguments -mavx512bw: YES 00:02:38.233 Compiler for C supports arguments -mavx512dq: YES 00:02:38.233 Compiler for C supports arguments -mavx512vl: YES 00:02:38.233 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:38.233 Compiler for C supports arguments -mavx2: YES 00:02:38.233 Compiler for C supports arguments -mavx: YES 00:02:38.233 Message: lib/net: Defining dependency "net" 00:02:38.233 Message: lib/meter: Defining dependency "meter" 00:02:38.233 Message: lib/ethdev: Defining dependency "ethdev" 00:02:38.233 Message: lib/pci: Defining dependency "pci" 00:02:38.233 Message: lib/cmdline: Defining dependency "cmdline" 00:02:38.233 Message: lib/hash: Defining dependency "hash" 00:02:38.233 Message: lib/timer: Defining dependency "timer" 00:02:38.233 Message: lib/compressdev: Defining dependency "compressdev" 00:02:38.233 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:38.233 Message: lib/dmadev: Defining dependency "dmadev" 00:02:38.233 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:38.233 Message: lib/power: Defining dependency "power" 00:02:38.233 Message: lib/reorder: Defining dependency "reorder" 00:02:38.233 Message: lib/security: Defining dependency "security" 00:02:38.233 Has header "linux/userfaultfd.h" : YES 00:02:38.233 Has header "linux/vduse.h" : YES 00:02:38.233 Message: lib/vhost: Defining dependency "vhost" 00:02:38.233 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:38.233 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:38.233 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:38.233 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:38.233 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:38.233 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:38.233 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:38.233 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:38.233 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:38.233 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:38.233 Program doxygen found: YES (/usr/bin/doxygen) 00:02:38.233 Configuring doxy-api-html.conf using configuration 00:02:38.233 Configuring doxy-api-man.conf using configuration 00:02:38.233 Program mandb found: YES (/usr/bin/mandb) 00:02:38.233 Program sphinx-build found: NO 00:02:38.233 Configuring rte_build_config.h using configuration 00:02:38.233 Message: 00:02:38.233 ================= 00:02:38.233 Applications Enabled 00:02:38.233 ================= 00:02:38.233 00:02:38.233 apps: 00:02:38.233 00:02:38.233 00:02:38.233 Message: 00:02:38.233 ================= 00:02:38.233 Libraries Enabled 00:02:38.233 ================= 00:02:38.233 00:02:38.233 libs: 00:02:38.233 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:38.233 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:38.233 cryptodev, dmadev, power, reorder, security, vhost, 00:02:38.233 00:02:38.233 Message: 00:02:38.233 =============== 00:02:38.233 Drivers Enabled 00:02:38.233 =============== 00:02:38.233 00:02:38.233 common: 00:02:38.233 00:02:38.233 bus: 00:02:38.233 pci, vdev, 00:02:38.233 mempool: 00:02:38.233 ring, 00:02:38.233 dma: 00:02:38.233 00:02:38.233 net: 00:02:38.233 00:02:38.233 crypto: 00:02:38.233 00:02:38.233 compress: 00:02:38.233 00:02:38.233 vdpa: 00:02:38.233 00:02:38.233 00:02:38.233 Message: 00:02:38.233 ================= 00:02:38.233 Content Skipped 00:02:38.233 ================= 00:02:38.233 00:02:38.233 apps: 00:02:38.233 dumpcap: explicitly disabled via build config 00:02:38.233 graph: explicitly disabled via build config 00:02:38.233 pdump: explicitly disabled via build config 00:02:38.233 proc-info: explicitly disabled via build config 00:02:38.233 test-acl: explicitly disabled via build config 00:02:38.233 test-bbdev: explicitly disabled via build config 00:02:38.233 test-cmdline: explicitly disabled via build config 00:02:38.233 test-compress-perf: explicitly disabled via build config 00:02:38.233 test-crypto-perf: explicitly disabled via build config 00:02:38.233 test-dma-perf: explicitly disabled via build config 00:02:38.233 test-eventdev: explicitly disabled via build config 00:02:38.233 test-fib: explicitly disabled via build config 00:02:38.233 test-flow-perf: explicitly disabled via build config 00:02:38.233 test-gpudev: explicitly disabled via build config 00:02:38.233 test-mldev: explicitly disabled via build config 00:02:38.233 test-pipeline: explicitly disabled via build config 00:02:38.233 test-pmd: explicitly disabled via build config 00:02:38.233 test-regex: explicitly disabled via build config 00:02:38.233 test-sad: explicitly disabled via build config 00:02:38.233 test-security-perf: explicitly disabled via build config 00:02:38.233 00:02:38.233 libs: 00:02:38.233 metrics: explicitly disabled via build config 00:02:38.233 acl: explicitly disabled via build config 00:02:38.233 bbdev: explicitly disabled via build config 00:02:38.233 bitratestats: explicitly disabled via build config 
00:02:38.233 bpf: explicitly disabled via build config 00:02:38.233 cfgfile: explicitly disabled via build config 00:02:38.233 distributor: explicitly disabled via build config 00:02:38.233 efd: explicitly disabled via build config 00:02:38.233 eventdev: explicitly disabled via build config 00:02:38.233 dispatcher: explicitly disabled via build config 00:02:38.233 gpudev: explicitly disabled via build config 00:02:38.233 gro: explicitly disabled via build config 00:02:38.233 gso: explicitly disabled via build config 00:02:38.233 ip_frag: explicitly disabled via build config 00:02:38.233 jobstats: explicitly disabled via build config 00:02:38.233 latencystats: explicitly disabled via build config 00:02:38.233 lpm: explicitly disabled via build config 00:02:38.233 member: explicitly disabled via build config 00:02:38.233 pcapng: explicitly disabled via build config 00:02:38.233 rawdev: explicitly disabled via build config 00:02:38.233 regexdev: explicitly disabled via build config 00:02:38.233 mldev: explicitly disabled via build config 00:02:38.233 rib: explicitly disabled via build config 00:02:38.233 sched: explicitly disabled via build config 00:02:38.233 stack: explicitly disabled via build config 00:02:38.233 ipsec: explicitly disabled via build config 00:02:38.234 pdcp: explicitly disabled via build config 00:02:38.234 fib: explicitly disabled via build config 00:02:38.234 port: explicitly disabled via build config 00:02:38.234 pdump: explicitly disabled via build config 00:02:38.234 table: explicitly disabled via build config 00:02:38.234 pipeline: explicitly disabled via build config 00:02:38.234 graph: explicitly disabled via build config 00:02:38.234 node: explicitly disabled via build config 00:02:38.234 00:02:38.234 drivers: 00:02:38.234 common/cpt: not in enabled drivers build config 00:02:38.234 common/dpaax: not in enabled drivers build config 00:02:38.234 common/iavf: not in enabled drivers build config 00:02:38.234 common/idpf: not in enabled drivers build config 00:02:38.234 common/mvep: not in enabled drivers build config 00:02:38.234 common/octeontx: not in enabled drivers build config 00:02:38.234 bus/auxiliary: not in enabled drivers build config 00:02:38.234 bus/cdx: not in enabled drivers build config 00:02:38.234 bus/dpaa: not in enabled drivers build config 00:02:38.234 bus/fslmc: not in enabled drivers build config 00:02:38.234 bus/ifpga: not in enabled drivers build config 00:02:38.234 bus/platform: not in enabled drivers build config 00:02:38.234 bus/vmbus: not in enabled drivers build config 00:02:38.234 common/cnxk: not in enabled drivers build config 00:02:38.234 common/mlx5: not in enabled drivers build config 00:02:38.234 common/nfp: not in enabled drivers build config 00:02:38.234 common/qat: not in enabled drivers build config 00:02:38.234 common/sfc_efx: not in enabled drivers build config 00:02:38.234 mempool/bucket: not in enabled drivers build config 00:02:38.234 mempool/cnxk: not in enabled drivers build config 00:02:38.234 mempool/dpaa: not in enabled drivers build config 00:02:38.234 mempool/dpaa2: not in enabled drivers build config 00:02:38.234 mempool/octeontx: not in enabled drivers build config 00:02:38.234 mempool/stack: not in enabled drivers build config 00:02:38.234 dma/cnxk: not in enabled drivers build config 00:02:38.234 dma/dpaa: not in enabled drivers build config 00:02:38.234 dma/dpaa2: not in enabled drivers build config 00:02:38.234 dma/hisilicon: not in enabled drivers build config 00:02:38.234 dma/idxd: not in enabled drivers 
build config 00:02:38.234 dma/ioat: not in enabled drivers build config 00:02:38.234 dma/skeleton: not in enabled drivers build config 00:02:38.234 net/af_packet: not in enabled drivers build config 00:02:38.234 net/af_xdp: not in enabled drivers build config 00:02:38.234 net/ark: not in enabled drivers build config 00:02:38.234 net/atlantic: not in enabled drivers build config 00:02:38.234 net/avp: not in enabled drivers build config 00:02:38.234 net/axgbe: not in enabled drivers build config 00:02:38.234 net/bnx2x: not in enabled drivers build config 00:02:38.234 net/bnxt: not in enabled drivers build config 00:02:38.234 net/bonding: not in enabled drivers build config 00:02:38.234 net/cnxk: not in enabled drivers build config 00:02:38.234 net/cpfl: not in enabled drivers build config 00:02:38.234 net/cxgbe: not in enabled drivers build config 00:02:38.234 net/dpaa: not in enabled drivers build config 00:02:38.234 net/dpaa2: not in enabled drivers build config 00:02:38.234 net/e1000: not in enabled drivers build config 00:02:38.234 net/ena: not in enabled drivers build config 00:02:38.234 net/enetc: not in enabled drivers build config 00:02:38.234 net/enetfec: not in enabled drivers build config 00:02:38.234 net/enic: not in enabled drivers build config 00:02:38.234 net/failsafe: not in enabled drivers build config 00:02:38.234 net/fm10k: not in enabled drivers build config 00:02:38.234 net/gve: not in enabled drivers build config 00:02:38.234 net/hinic: not in enabled drivers build config 00:02:38.234 net/hns3: not in enabled drivers build config 00:02:38.234 net/i40e: not in enabled drivers build config 00:02:38.234 net/iavf: not in enabled drivers build config 00:02:38.234 net/ice: not in enabled drivers build config 00:02:38.234 net/idpf: not in enabled drivers build config 00:02:38.234 net/igc: not in enabled drivers build config 00:02:38.234 net/ionic: not in enabled drivers build config 00:02:38.234 net/ipn3ke: not in enabled drivers build config 00:02:38.234 net/ixgbe: not in enabled drivers build config 00:02:38.234 net/mana: not in enabled drivers build config 00:02:38.234 net/memif: not in enabled drivers build config 00:02:38.234 net/mlx4: not in enabled drivers build config 00:02:38.234 net/mlx5: not in enabled drivers build config 00:02:38.234 net/mvneta: not in enabled drivers build config 00:02:38.234 net/mvpp2: not in enabled drivers build config 00:02:38.234 net/netvsc: not in enabled drivers build config 00:02:38.234 net/nfb: not in enabled drivers build config 00:02:38.234 net/nfp: not in enabled drivers build config 00:02:38.234 net/ngbe: not in enabled drivers build config 00:02:38.234 net/null: not in enabled drivers build config 00:02:38.234 net/octeontx: not in enabled drivers build config 00:02:38.234 net/octeon_ep: not in enabled drivers build config 00:02:38.234 net/pcap: not in enabled drivers build config 00:02:38.234 net/pfe: not in enabled drivers build config 00:02:38.234 net/qede: not in enabled drivers build config 00:02:38.234 net/ring: not in enabled drivers build config 00:02:38.234 net/sfc: not in enabled drivers build config 00:02:38.234 net/softnic: not in enabled drivers build config 00:02:38.234 net/tap: not in enabled drivers build config 00:02:38.234 net/thunderx: not in enabled drivers build config 00:02:38.234 net/txgbe: not in enabled drivers build config 00:02:38.234 net/vdev_netvsc: not in enabled drivers build config 00:02:38.234 net/vhost: not in enabled drivers build config 00:02:38.234 net/virtio: not in enabled drivers build config 
00:02:38.234 net/vmxnet3: not in enabled drivers build config 00:02:38.234 raw/*: missing internal dependency, "rawdev" 00:02:38.234 crypto/armv8: not in enabled drivers build config 00:02:38.234 crypto/bcmfs: not in enabled drivers build config 00:02:38.234 crypto/caam_jr: not in enabled drivers build config 00:02:38.234 crypto/ccp: not in enabled drivers build config 00:02:38.234 crypto/cnxk: not in enabled drivers build config 00:02:38.234 crypto/dpaa_sec: not in enabled drivers build config 00:02:38.234 crypto/dpaa2_sec: not in enabled drivers build config 00:02:38.234 crypto/ipsec_mb: not in enabled drivers build config 00:02:38.234 crypto/mlx5: not in enabled drivers build config 00:02:38.234 crypto/mvsam: not in enabled drivers build config 00:02:38.234 crypto/nitrox: not in enabled drivers build config 00:02:38.234 crypto/null: not in enabled drivers build config 00:02:38.234 crypto/octeontx: not in enabled drivers build config 00:02:38.234 crypto/openssl: not in enabled drivers build config 00:02:38.234 crypto/scheduler: not in enabled drivers build config 00:02:38.234 crypto/uadk: not in enabled drivers build config 00:02:38.234 crypto/virtio: not in enabled drivers build config 00:02:38.234 compress/isal: not in enabled drivers build config 00:02:38.234 compress/mlx5: not in enabled drivers build config 00:02:38.234 compress/octeontx: not in enabled drivers build config 00:02:38.234 compress/zlib: not in enabled drivers build config 00:02:38.234 regex/*: missing internal dependency, "regexdev" 00:02:38.234 ml/*: missing internal dependency, "mldev" 00:02:38.234 vdpa/ifc: not in enabled drivers build config 00:02:38.234 vdpa/mlx5: not in enabled drivers build config 00:02:38.234 vdpa/nfp: not in enabled drivers build config 00:02:38.234 vdpa/sfc: not in enabled drivers build config 00:02:38.234 event/*: missing internal dependency, "eventdev" 00:02:38.234 baseband/*: missing internal dependency, "bbdev" 00:02:38.234 gpu/*: missing internal dependency, "gpudev" 00:02:38.234 00:02:38.234 00:02:38.234 Build targets in project: 85 00:02:38.234 00:02:38.234 DPDK 23.11.0 00:02:38.234 00:02:38.234 User defined options 00:02:38.234 buildtype : debug 00:02:38.234 default_library : static 00:02:38.234 libdir : lib 00:02:38.234 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:38.234 b_sanitize : address 00:02:38.234 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:02:38.234 c_link_args : 00:02:38.234 cpu_instruction_set: native 00:02:38.234 disable_apps : test-pipeline,test-pmd,test-eventdev,test,test-cmdline,test-bbdev,test-sad,proc-info,graph,test-gpudev,test-crypto-perf,test-dma-perf,test-regex,test-mldev,test-acl,test-flow-perf,dumpcap,test-compress-perf,test-security-perf,test-fib,pdump 00:02:38.234 disable_libs : mldev,jobstats,bpf,rawdev,rib,stack,bbdev,lpm,pipeline,member,port,regexdev,latencystats,table,bitratestats,acl,sched,node,graph,gso,dispatcher,efd,eventdev,pdcp,fib,pcapng,cfgfile,metrics,ip_frag,gro,pdump,gpudev,distributor,ipsec 00:02:38.234 enable_docs : false 00:02:38.234 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:38.234 enable_kmods : false 00:02:38.234 tests : false 00:02:38.234 00:02:38.234 Found ninja-1.11.1.git.kitware.jobserver-1 at /var/spdk/dependencies/pip/bin/ninja 00:02:38.234 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:38.234 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:38.234 [2/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:38.234 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:38.234 [4/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:38.234 [5/265] Linking static target lib/librte_kvargs.a 00:02:38.234 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:38.234 [7/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:38.234 [8/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:38.234 [9/265] Linking static target lib/librte_log.a 00:02:38.234 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:38.234 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.234 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:38.234 [13/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.234 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:38.234 [15/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:38.234 [16/265] Linking target lib/librte_log.so.24.0 00:02:38.234 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:38.234 [18/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:38.235 [19/265] Linking static target lib/librte_telemetry.a 00:02:38.235 [20/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:38.235 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:38.235 [22/265] Linking target lib/librte_kvargs.so.24.0 00:02:38.235 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:38.235 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:38.235 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:38.235 [26/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:38.235 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:38.235 [28/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.235 [29/265] Linking target lib/librte_telemetry.so.24.0 00:02:38.235 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:38.235 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:38.235 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:38.235 [33/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:38.235 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:38.235 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:38.235 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:38.235 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:38.235 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:38.235 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:38.235 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:38.235 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 
00:02:38.235 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:38.235 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:38.493 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:38.493 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:38.493 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:38.493 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:38.493 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:38.751 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:38.751 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:38.752 [51/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:38.752 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:39.009 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:39.009 [54/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:39.009 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:39.009 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:39.009 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:39.010 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:39.268 [59/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:39.268 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:39.268 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:39.268 [62/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:39.268 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:39.268 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:39.526 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:39.526 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:39.526 [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:39.526 [68/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:39.784 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:39.784 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:39.784 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:39.784 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:39.784 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:39.784 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:39.784 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:39.784 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:39.784 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:40.043 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:40.043 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:40.043 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:40.043 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:40.300 [82/265] Compiling 
C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:40.300 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:40.300 [84/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:40.300 [85/265] Linking static target lib/librte_eal.a 00:02:40.300 [86/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:40.300 [87/265] Linking static target lib/librte_ring.a 00:02:40.559 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:40.559 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:40.559 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:40.559 [91/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.559 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:40.818 [93/265] Linking static target lib/librte_mempool.a 00:02:40.818 [94/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:40.818 [95/265] Linking static target lib/librte_rcu.a 00:02:40.818 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:40.818 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:41.076 [98/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.076 [99/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:41.076 [100/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:41.077 [101/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:41.335 [102/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:41.335 [103/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.335 [104/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:41.592 [105/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:41.592 [106/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:41.592 [107/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:41.592 [108/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:41.592 [109/265] Linking static target lib/librte_mbuf.a 00:02:41.592 [110/265] Linking static target lib/librte_net.a 00:02:41.592 [111/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:41.592 [112/265] Linking static target lib/librte_meter.a 00:02:41.850 [113/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.850 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:41.850 [115/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.850 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:42.108 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:42.108 [118/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.108 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:42.672 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:42.672 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:42.672 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:42.672 [123/265] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:42.929 [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:42.929 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:42.929 [126/265] Linking static target lib/librte_pci.a 00:02:42.929 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:42.929 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:42.929 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:42.929 [130/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.188 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:43.188 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:43.188 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:43.188 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:43.188 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:43.188 [136/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:43.188 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:43.188 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:43.188 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:43.446 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:43.446 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:43.446 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:43.703 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:43.703 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:43.703 [145/265] Linking static target lib/librte_cmdline.a 00:02:43.960 [146/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:43.960 [147/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:43.960 [148/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:43.960 [149/265] Linking static target lib/librte_timer.a 00:02:44.217 [150/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:44.217 [151/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:44.474 [152/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.474 [153/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:44.474 [154/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:44.474 [155/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:44.474 [156/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.474 [157/265] Linking static target lib/librte_ethdev.a 00:02:44.474 [158/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:44.731 [159/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:44.731 [160/265] Linking static target lib/librte_compressdev.a 00:02:44.731 [161/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:44.731 [162/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:44.731 [163/265] Linking static target 
lib/librte_hash.a 00:02:44.989 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:44.989 [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:44.989 [166/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:44.989 [167/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:45.247 [168/265] Linking static target lib/librte_dmadev.a 00:02:45.247 [169/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:45.247 [170/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.247 [171/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:45.534 [172/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.534 [173/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.534 [174/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:45.534 [175/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:45.800 [176/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:45.800 [177/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:45.800 [178/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:45.800 [179/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:46.057 [180/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:46.057 [181/265] Linking static target lib/librte_cryptodev.a 00:02:46.057 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:46.057 [183/265] Linking static target lib/librte_power.a 00:02:46.315 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:46.315 [185/265] Linking static target lib/librte_reorder.a 00:02:46.315 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:46.573 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:46.573 [188/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:46.573 [189/265] Linking static target lib/librte_security.a 00:02:46.573 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:46.573 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.832 [192/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.832 [193/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:46.832 [194/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.091 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:47.091 [196/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.349 [197/265] Linking target lib/librte_eal.so.24.0 00:02:47.349 [198/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.349 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:47.349 [200/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:47.349 [201/265] Linking target lib/librte_ring.so.24.0 00:02:47.606 [202/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:47.606 [203/265] Compiling C 
object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:47.606 [204/265] Linking target lib/librte_rcu.so.24.0 00:02:47.606 [205/265] Linking target lib/librte_mempool.so.24.0 00:02:47.606 [206/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:47.606 [207/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:47.606 [208/265] Linking target lib/librte_meter.so.24.0 00:02:47.606 [209/265] Linking target lib/librte_pci.so.24.0 00:02:47.606 [210/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:47.606 [211/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:47.606 [212/265] Linking target lib/librte_timer.so.24.0 00:02:47.606 [213/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:47.863 [214/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:47.863 [215/265] Linking target lib/librte_dmadev.so.24.0 00:02:47.863 [216/265] Linking target lib/librte_mbuf.so.24.0 00:02:47.863 [217/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:47.863 [218/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:47.863 [219/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:47.863 [220/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:47.863 [221/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:47.863 [222/265] Linking target lib/librte_net.so.24.0 00:02:47.863 [223/265] Linking target lib/librte_compressdev.so.24.0 00:02:47.863 [224/265] Linking target lib/librte_cryptodev.so.24.0 00:02:47.863 [225/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:47.863 [226/265] Linking target lib/librte_reorder.so.24.0 00:02:48.121 [227/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:48.121 [228/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:48.121 [229/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:48.121 [230/265] Linking target lib/librte_hash.so.24.0 00:02:48.121 [231/265] Linking target lib/librte_cmdline.so.24.0 00:02:48.121 [232/265] Linking target lib/librte_security.so.24.0 00:02:48.379 [233/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:48.379 [234/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:48.379 [235/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:48.379 [236/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:48.379 [237/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:48.637 [238/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:48.637 [239/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:48.637 [240/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:48.637 [241/265] Linking static target drivers/librte_bus_vdev.a 00:02:48.637 [242/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:48.637 [243/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:48.637 [244/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 
00:02:48.896 [245/265] Linking static target drivers/librte_bus_pci.a 00:02:48.896 [246/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:48.896 [247/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:48.896 [248/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.896 [249/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:49.158 [250/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:49.158 [251/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:49.158 [252/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:49.158 [253/265] Linking static target drivers/librte_mempool_ring.a 00:02:49.158 [254/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:49.415 [255/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.415 [256/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:49.981 [257/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.981 [258/265] Linking target lib/librte_ethdev.so.24.0 00:02:50.239 [259/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:50.239 [260/265] Linking target lib/librte_power.so.24.0 00:02:50.497 [261/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:54.678 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:54.678 [263/265] Linking static target lib/librte_vhost.a 00:02:56.580 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.580 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:56.580 INFO: autodetecting backend as ninja 00:02:56.580 INFO: calculating backend command to run: /var/spdk/dependencies/pip/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:57.516 CC lib/ut/ut.o 00:02:57.516 CC lib/log/log_deprecated.o 00:02:57.516 CC lib/log/log.o 00:02:57.516 CC lib/log/log_flags.o 00:02:57.516 CC lib/ut_mock/mock.o 00:02:57.774 LIB libspdk_ut_mock.a 00:02:57.774 LIB libspdk_log.a 00:02:57.774 LIB libspdk_ut.a 00:02:58.033 CC lib/ioat/ioat.o 00:02:58.033 CXX lib/trace_parser/trace.o 00:02:58.033 CC lib/dma/dma.o 00:02:58.033 CC lib/util/base64.o 00:02:58.033 CC lib/util/bit_array.o 00:02:58.033 CC lib/util/cpuset.o 00:02:58.033 CC lib/util/crc16.o 00:02:58.033 CC lib/util/crc32.o 00:02:58.033 CC lib/util/crc32c.o 00:02:58.033 CC lib/vfio_user/host/vfio_user_pci.o 00:02:58.291 CC lib/vfio_user/host/vfio_user.o 00:02:58.291 CC lib/util/crc32_ieee.o 00:02:58.291 CC lib/util/crc64.o 00:02:58.291 CC lib/util/dif.o 00:02:58.291 CC lib/util/fd.o 00:02:58.291 CC lib/util/file.o 00:02:58.291 LIB libspdk_dma.a 00:02:58.291 CC lib/util/hexlify.o 00:02:58.291 CC lib/util/iov.o 00:02:58.291 CC lib/util/math.o 00:02:58.291 CC lib/util/pipe.o 00:02:58.549 LIB libspdk_vfio_user.a 00:02:58.549 CC lib/util/strerror_tls.o 00:02:58.549 CC lib/util/string.o 00:02:58.549 CC lib/util/uuid.o 00:02:58.549 LIB libspdk_ioat.a 00:02:58.549 CC lib/util/fd_group.o 00:02:58.549 CC lib/util/xor.o 00:02:58.549 CC lib/util/zipf.o 00:02:59.127 LIB libspdk_util.a 00:02:59.127 CC lib/vmd/vmd.o 00:02:59.127 CC lib/vmd/led.o 00:02:59.127 CC lib/conf/conf.o 00:02:59.127 CC lib/json/json_parse.o 00:02:59.127 CC lib/json/json_util.o 00:02:59.127 CC 
lib/json/json_write.o 00:02:59.127 CC lib/env_dpdk/env.o 00:02:59.127 CC lib/idxd/idxd.o 00:02:59.127 CC lib/rdma/common.o 00:02:59.395 CC lib/rdma/rdma_verbs.o 00:02:59.395 LIB libspdk_conf.a 00:02:59.395 LIB libspdk_trace_parser.a 00:02:59.395 CC lib/env_dpdk/memory.o 00:02:59.395 CC lib/env_dpdk/pci.o 00:02:59.654 CC lib/idxd/idxd_user.o 00:02:59.654 CC lib/idxd/idxd_kernel.o 00:02:59.654 LIB libspdk_json.a 00:02:59.654 CC lib/env_dpdk/init.o 00:02:59.654 LIB libspdk_rdma.a 00:02:59.654 CC lib/env_dpdk/threads.o 00:02:59.654 CC lib/jsonrpc/jsonrpc_server.o 00:02:59.654 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:59.911 CC lib/jsonrpc/jsonrpc_client.o 00:02:59.911 CC lib/env_dpdk/pci_ioat.o 00:02:59.911 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:59.911 CC lib/env_dpdk/pci_virtio.o 00:02:59.911 LIB libspdk_idxd.a 00:02:59.911 CC lib/env_dpdk/pci_vmd.o 00:02:59.911 CC lib/env_dpdk/pci_idxd.o 00:03:00.169 CC lib/env_dpdk/pci_event.o 00:03:00.169 CC lib/env_dpdk/sigbus_handler.o 00:03:00.169 CC lib/env_dpdk/pci_dpdk.o 00:03:00.169 LIB libspdk_vmd.a 00:03:00.169 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:00.169 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:00.169 LIB libspdk_jsonrpc.a 00:03:00.442 CC lib/rpc/rpc.o 00:03:00.706 LIB libspdk_rpc.a 00:03:00.706 CC lib/trace/trace.o 00:03:00.706 CC lib/trace/trace_flags.o 00:03:00.706 CC lib/trace/trace_rpc.o 00:03:00.706 CC lib/notify/notify.o 00:03:00.706 CC lib/sock/sock.o 00:03:00.706 CC lib/notify/notify_rpc.o 00:03:00.706 CC lib/sock/sock_rpc.o 00:03:00.964 LIB libspdk_notify.a 00:03:00.964 LIB libspdk_trace.a 00:03:01.222 CC lib/thread/thread.o 00:03:01.222 CC lib/thread/iobuf.o 00:03:01.222 LIB libspdk_sock.a 00:03:01.222 LIB libspdk_env_dpdk.a 00:03:01.480 CC lib/nvme/nvme_ctrlr.o 00:03:01.480 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:01.480 CC lib/nvme/nvme_fabric.o 00:03:01.480 CC lib/nvme/nvme_ns_cmd.o 00:03:01.480 CC lib/nvme/nvme_ns.o 00:03:01.480 CC lib/nvme/nvme_pcie_common.o 00:03:01.480 CC lib/nvme/nvme_pcie.o 00:03:01.480 CC lib/nvme/nvme_qpair.o 00:03:01.738 CC lib/nvme/nvme.o 00:03:02.304 CC lib/nvme/nvme_quirks.o 00:03:02.304 CC lib/nvme/nvme_transport.o 00:03:02.304 CC lib/nvme/nvme_discovery.o 00:03:02.562 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:02.562 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:02.562 CC lib/nvme/nvme_tcp.o 00:03:02.820 CC lib/nvme/nvme_opal.o 00:03:02.820 CC lib/nvme/nvme_io_msg.o 00:03:03.078 CC lib/nvme/nvme_poll_group.o 00:03:03.078 CC lib/nvme/nvme_zns.o 00:03:03.078 CC lib/nvme/nvme_cuse.o 00:03:03.336 CC lib/nvme/nvme_vfio_user.o 00:03:03.336 LIB libspdk_thread.a 00:03:03.336 CC lib/nvme/nvme_rdma.o 00:03:03.593 CC lib/accel/accel.o 00:03:03.593 CC lib/blob/blobstore.o 00:03:03.593 CC lib/blob/request.o 00:03:03.851 CC lib/blob/zeroes.o 00:03:03.851 CC lib/blob/blob_bs_dev.o 00:03:03.851 CC lib/accel/accel_rpc.o 00:03:04.109 CC lib/init/json_config.o 00:03:04.109 CC lib/init/subsystem.o 00:03:04.109 CC lib/virtio/virtio.o 00:03:04.109 CC lib/virtio/virtio_vhost_user.o 00:03:04.367 CC lib/virtio/virtio_vfio_user.o 00:03:04.367 CC lib/init/subsystem_rpc.o 00:03:04.367 CC lib/init/rpc.o 00:03:04.625 CC lib/accel/accel_sw.o 00:03:04.625 CC lib/virtio/virtio_pci.o 00:03:04.625 LIB libspdk_init.a 00:03:04.625 CC lib/event/app.o 00:03:04.625 CC lib/event/reactor.o 00:03:04.625 CC lib/event/log_rpc.o 00:03:04.625 CC lib/event/app_rpc.o 00:03:04.625 CC lib/event/scheduler_static.o 00:03:04.883 LIB libspdk_accel.a 00:03:04.883 LIB libspdk_virtio.a 00:03:05.141 CC lib/bdev/bdev_zone.o 00:03:05.141 CC lib/bdev/bdev.o 00:03:05.141 CC 
lib/bdev/bdev_rpc.o 00:03:05.141 CC lib/bdev/part.o 00:03:05.141 CC lib/bdev/scsi_nvme.o 00:03:05.141 LIB libspdk_nvme.a 00:03:05.141 LIB libspdk_event.a 00:03:07.698 LIB libspdk_blob.a 00:03:07.698 CC lib/blobfs/blobfs.o 00:03:07.698 CC lib/blobfs/tree.o 00:03:07.698 CC lib/lvol/lvol.o 00:03:08.633 LIB libspdk_bdev.a 00:03:08.633 CC lib/scsi/dev.o 00:03:08.633 CC lib/nbd/nbd.o 00:03:08.633 CC lib/nbd/nbd_rpc.o 00:03:08.633 CC lib/scsi/lun.o 00:03:08.633 CC lib/scsi/port.o 00:03:08.633 CC lib/ublk/ublk.o 00:03:08.633 CC lib/ftl/ftl_core.o 00:03:08.633 CC lib/nvmf/ctrlr.o 00:03:08.891 CC lib/ftl/ftl_init.o 00:03:08.891 CC lib/ftl/ftl_layout.o 00:03:08.891 LIB libspdk_blobfs.a 00:03:08.891 CC lib/scsi/scsi.o 00:03:08.891 CC lib/ublk/ublk_rpc.o 00:03:08.891 CC lib/nvmf/ctrlr_discovery.o 00:03:09.149 LIB libspdk_lvol.a 00:03:09.149 CC lib/ftl/ftl_debug.o 00:03:09.149 CC lib/ftl/ftl_io.o 00:03:09.149 CC lib/ftl/ftl_sb.o 00:03:09.149 CC lib/scsi/scsi_bdev.o 00:03:09.149 LIB libspdk_nbd.a 00:03:09.149 CC lib/nvmf/ctrlr_bdev.o 00:03:09.149 CC lib/nvmf/subsystem.o 00:03:09.149 CC lib/ftl/ftl_l2p.o 00:03:09.407 CC lib/scsi/scsi_pr.o 00:03:09.407 CC lib/scsi/scsi_rpc.o 00:03:09.407 CC lib/ftl/ftl_l2p_flat.o 00:03:09.407 CC lib/scsi/task.o 00:03:09.407 LIB libspdk_ublk.a 00:03:09.407 CC lib/nvmf/nvmf.o 00:03:09.665 CC lib/nvmf/nvmf_rpc.o 00:03:09.665 CC lib/ftl/ftl_nv_cache.o 00:03:09.665 CC lib/ftl/ftl_band.o 00:03:09.665 CC lib/nvmf/transport.o 00:03:09.665 CC lib/nvmf/tcp.o 00:03:09.924 LIB libspdk_scsi.a 00:03:09.924 CC lib/nvmf/rdma.o 00:03:10.182 CC lib/ftl/ftl_band_ops.o 00:03:10.182 CC lib/ftl/ftl_writer.o 00:03:10.440 CC lib/ftl/ftl_rq.o 00:03:10.440 CC lib/ftl/ftl_reloc.o 00:03:10.440 CC lib/ftl/ftl_l2p_cache.o 00:03:10.699 CC lib/ftl/ftl_p2l.o 00:03:10.699 CC lib/ftl/mngt/ftl_mngt.o 00:03:10.699 CC lib/iscsi/conn.o 00:03:10.699 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:10.957 CC lib/vhost/vhost.o 00:03:10.957 CC lib/iscsi/init_grp.o 00:03:10.957 CC lib/iscsi/iscsi.o 00:03:10.957 CC lib/vhost/vhost_rpc.o 00:03:10.957 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:10.957 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:11.215 CC lib/iscsi/md5.o 00:03:11.215 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:11.215 CC lib/vhost/vhost_scsi.o 00:03:11.215 CC lib/iscsi/param.o 00:03:11.474 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:11.474 CC lib/iscsi/portal_grp.o 00:03:11.732 CC lib/iscsi/tgt_node.o 00:03:11.732 CC lib/iscsi/iscsi_subsystem.o 00:03:11.732 CC lib/vhost/vhost_blk.o 00:03:11.732 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:11.732 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:11.991 CC lib/iscsi/iscsi_rpc.o 00:03:11.991 CC lib/iscsi/task.o 00:03:11.991 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:11.991 CC lib/vhost/rte_vhost_user.o 00:03:11.991 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:12.250 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:12.250 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:12.250 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:12.250 CC lib/ftl/utils/ftl_conf.o 00:03:12.508 CC lib/ftl/utils/ftl_md.o 00:03:12.508 CC lib/ftl/utils/ftl_mempool.o 00:03:12.508 CC lib/ftl/utils/ftl_bitmap.o 00:03:12.509 CC lib/ftl/utils/ftl_property.o 00:03:12.509 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:12.509 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:12.767 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:12.767 LIB libspdk_nvmf.a 00:03:12.767 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:12.767 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:12.767 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:12.767 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:12.767 CC 
lib/ftl/upgrade/ftl_sb_v5.o 00:03:12.767 LIB libspdk_iscsi.a 00:03:13.026 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:13.026 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:13.026 CC lib/ftl/base/ftl_base_dev.o 00:03:13.026 CC lib/ftl/base/ftl_base_bdev.o 00:03:13.026 CC lib/ftl/ftl_trace.o 00:03:13.285 LIB libspdk_ftl.a 00:03:13.543 LIB libspdk_vhost.a 00:03:13.543 CC module/env_dpdk/env_dpdk_rpc.o 00:03:13.801 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:13.801 CC module/accel/iaa/accel_iaa.o 00:03:13.801 CC module/accel/error/accel_error.o 00:03:13.801 CC module/scheduler/gscheduler/gscheduler.o 00:03:13.801 CC module/blob/bdev/blob_bdev.o 00:03:13.801 CC module/accel/dsa/accel_dsa.o 00:03:13.801 CC module/accel/ioat/accel_ioat.o 00:03:13.801 CC module/sock/posix/posix.o 00:03:13.801 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:13.801 LIB libspdk_env_dpdk_rpc.a 00:03:13.801 CC module/accel/ioat/accel_ioat_rpc.o 00:03:13.801 LIB libspdk_scheduler_dpdk_governor.a 00:03:13.801 LIB libspdk_scheduler_gscheduler.a 00:03:13.801 CC module/accel/iaa/accel_iaa_rpc.o 00:03:13.801 CC module/accel/dsa/accel_dsa_rpc.o 00:03:13.801 CC module/accel/error/accel_error_rpc.o 00:03:13.801 LIB libspdk_scheduler_dynamic.a 00:03:14.059 LIB libspdk_accel_ioat.a 00:03:14.059 LIB libspdk_blob_bdev.a 00:03:14.059 LIB libspdk_accel_iaa.a 00:03:14.059 LIB libspdk_accel_dsa.a 00:03:14.059 LIB libspdk_accel_error.a 00:03:14.059 CC module/bdev/gpt/gpt.o 00:03:14.059 CC module/bdev/null/bdev_null.o 00:03:14.327 CC module/bdev/delay/vbdev_delay.o 00:03:14.327 CC module/bdev/lvol/vbdev_lvol.o 00:03:14.327 CC module/bdev/malloc/bdev_malloc.o 00:03:14.327 CC module/blobfs/bdev/blobfs_bdev.o 00:03:14.327 CC module/bdev/error/vbdev_error.o 00:03:14.327 CC module/bdev/passthru/vbdev_passthru.o 00:03:14.327 CC module/bdev/nvme/bdev_nvme.o 00:03:14.327 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:14.327 CC module/bdev/gpt/vbdev_gpt.o 00:03:14.587 CC module/bdev/null/bdev_null_rpc.o 00:03:14.587 CC module/bdev/error/vbdev_error_rpc.o 00:03:14.587 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:14.587 LIB libspdk_blobfs_bdev.a 00:03:14.587 LIB libspdk_sock_posix.a 00:03:14.587 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:14.587 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:14.845 CC module/bdev/raid/bdev_raid.o 00:03:14.845 LIB libspdk_bdev_error.a 00:03:14.845 LIB libspdk_bdev_null.a 00:03:14.845 LIB libspdk_bdev_gpt.a 00:03:14.845 LIB libspdk_bdev_passthru.a 00:03:14.845 CC module/bdev/split/vbdev_split.o 00:03:14.845 CC module/bdev/split/vbdev_split_rpc.o 00:03:14.845 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:14.845 LIB libspdk_bdev_malloc.a 00:03:14.845 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:14.845 LIB libspdk_bdev_delay.a 00:03:14.845 CC module/bdev/aio/bdev_aio.o 00:03:14.845 CC module/bdev/ftl/bdev_ftl.o 00:03:15.104 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:15.104 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:15.104 CC module/bdev/iscsi/bdev_iscsi.o 00:03:15.104 LIB libspdk_bdev_split.a 00:03:15.104 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:15.104 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:15.363 LIB libspdk_bdev_lvol.a 00:03:15.363 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:15.363 CC module/bdev/aio/bdev_aio_rpc.o 00:03:15.363 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:15.363 CC module/bdev/raid/bdev_raid_rpc.o 00:03:15.363 LIB libspdk_bdev_zone_block.a 00:03:15.363 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:15.363 LIB libspdk_bdev_iscsi.a 00:03:15.363 LIB libspdk_bdev_aio.a 
00:03:15.363 CC module/bdev/raid/bdev_raid_sb.o 00:03:15.621 LIB libspdk_bdev_ftl.a 00:03:15.621 CC module/bdev/nvme/nvme_rpc.o 00:03:15.621 CC module/bdev/raid/raid0.o 00:03:15.621 CC module/bdev/nvme/bdev_mdns_client.o 00:03:15.621 CC module/bdev/nvme/vbdev_opal.o 00:03:15.621 CC module/bdev/raid/raid1.o 00:03:15.621 LIB libspdk_bdev_virtio.a 00:03:15.621 CC module/bdev/raid/concat.o 00:03:15.621 CC module/bdev/raid/raid5f.o 00:03:15.880 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:15.880 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:16.447 LIB libspdk_bdev_raid.a 00:03:17.016 LIB libspdk_bdev_nvme.a 00:03:17.274 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:17.274 CC module/event/subsystems/iobuf/iobuf.o 00:03:17.275 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:17.275 CC module/event/subsystems/vmd/vmd.o 00:03:17.275 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:17.275 CC module/event/subsystems/scheduler/scheduler.o 00:03:17.275 CC module/event/subsystems/sock/sock.o 00:03:17.537 LIB libspdk_event_sock.a 00:03:17.538 LIB libspdk_event_vhost_blk.a 00:03:17.538 LIB libspdk_event_scheduler.a 00:03:17.538 LIB libspdk_event_vmd.a 00:03:17.538 LIB libspdk_event_iobuf.a 00:03:17.538 CC module/event/subsystems/accel/accel.o 00:03:17.820 LIB libspdk_event_accel.a 00:03:18.090 CC module/event/subsystems/bdev/bdev.o 00:03:18.090 LIB libspdk_event_bdev.a 00:03:18.350 CC module/event/subsystems/nbd/nbd.o 00:03:18.350 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:18.350 CC module/event/subsystems/scsi/scsi.o 00:03:18.350 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:18.350 CC module/event/subsystems/ublk/ublk.o 00:03:18.608 LIB libspdk_event_nbd.a 00:03:18.608 LIB libspdk_event_ublk.a 00:03:18.608 LIB libspdk_event_scsi.a 00:03:18.608 LIB libspdk_event_nvmf.a 00:03:18.608 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:18.608 CC module/event/subsystems/iscsi/iscsi.o 00:03:18.867 LIB libspdk_event_vhost_scsi.a 00:03:18.867 LIB libspdk_event_iscsi.a 00:03:19.126 CC app/trace_record/trace_record.o 00:03:19.126 CXX app/trace/trace.o 00:03:19.126 CC app/iscsi_tgt/iscsi_tgt.o 00:03:19.126 CC app/spdk_tgt/spdk_tgt.o 00:03:19.126 CC examples/accel/perf/accel_perf.o 00:03:19.126 CC app/nvmf_tgt/nvmf_main.o 00:03:19.126 CC test/app/bdev_svc/bdev_svc.o 00:03:19.126 CC examples/blob/hello_world/hello_blob.o 00:03:19.126 CC examples/bdev/hello_world/hello_bdev.o 00:03:19.126 CC test/accel/dif/dif.o 00:03:19.385 LINK nvmf_tgt 00:03:19.385 LINK iscsi_tgt 00:03:19.385 LINK bdev_svc 00:03:19.385 LINK spdk_trace_record 00:03:19.385 LINK spdk_tgt 00:03:19.385 LINK hello_blob 00:03:19.385 LINK hello_bdev 00:03:19.644 LINK spdk_trace 00:03:19.644 LINK dif 00:03:19.903 LINK accel_perf 00:03:19.903 CC examples/bdev/bdevperf/bdevperf.o 00:03:19.903 CC examples/blob/cli/blobcli.o 00:03:20.470 CC app/spdk_lspci/spdk_lspci.o 00:03:20.729 LINK blobcli 00:03:20.729 LINK spdk_lspci 00:03:20.988 LINK bdevperf 00:03:21.247 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:21.247 CC test/app/histogram_perf/histogram_perf.o 00:03:21.247 CC examples/ioat/perf/perf.o 00:03:21.247 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:21.506 LINK histogram_perf 00:03:21.506 CC test/bdev/bdevio/bdevio.o 00:03:21.506 TEST_HEADER include/spdk/accel.h 00:03:21.506 TEST_HEADER include/spdk/accel_module.h 00:03:21.506 TEST_HEADER include/spdk/assert.h 00:03:21.506 TEST_HEADER include/spdk/barrier.h 00:03:21.506 TEST_HEADER include/spdk/base64.h 00:03:21.506 TEST_HEADER include/spdk/bdev.h 00:03:21.506 TEST_HEADER 
include/spdk/bdev_module.h 00:03:21.506 TEST_HEADER include/spdk/bdev_zone.h 00:03:21.506 TEST_HEADER include/spdk/bit_array.h 00:03:21.506 TEST_HEADER include/spdk/bit_pool.h 00:03:21.506 TEST_HEADER include/spdk/blob.h 00:03:21.506 TEST_HEADER include/spdk/blob_bdev.h 00:03:21.506 TEST_HEADER include/spdk/blobfs.h 00:03:21.506 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:21.506 TEST_HEADER include/spdk/conf.h 00:03:21.506 TEST_HEADER include/spdk/config.h 00:03:21.506 TEST_HEADER include/spdk/cpuset.h 00:03:21.506 TEST_HEADER include/spdk/crc16.h 00:03:21.506 TEST_HEADER include/spdk/crc32.h 00:03:21.506 LINK ioat_perf 00:03:21.506 TEST_HEADER include/spdk/crc64.h 00:03:21.506 TEST_HEADER include/spdk/dif.h 00:03:21.506 TEST_HEADER include/spdk/dma.h 00:03:21.506 TEST_HEADER include/spdk/endian.h 00:03:21.506 CC app/spdk_nvme_perf/perf.o 00:03:21.506 TEST_HEADER include/spdk/env.h 00:03:21.506 TEST_HEADER include/spdk/env_dpdk.h 00:03:21.506 TEST_HEADER include/spdk/event.h 00:03:21.506 TEST_HEADER include/spdk/fd.h 00:03:21.506 TEST_HEADER include/spdk/fd_group.h 00:03:21.506 TEST_HEADER include/spdk/file.h 00:03:21.506 TEST_HEADER include/spdk/ftl.h 00:03:21.506 TEST_HEADER include/spdk/gpt_spec.h 00:03:21.506 TEST_HEADER include/spdk/hexlify.h 00:03:21.506 TEST_HEADER include/spdk/histogram_data.h 00:03:21.506 TEST_HEADER include/spdk/idxd.h 00:03:21.506 CC test/blobfs/mkfs/mkfs.o 00:03:21.506 TEST_HEADER include/spdk/idxd_spec.h 00:03:21.506 TEST_HEADER include/spdk/init.h 00:03:21.506 TEST_HEADER include/spdk/ioat.h 00:03:21.506 TEST_HEADER include/spdk/ioat_spec.h 00:03:21.506 TEST_HEADER include/spdk/iscsi_spec.h 00:03:21.506 TEST_HEADER include/spdk/json.h 00:03:21.506 TEST_HEADER include/spdk/jsonrpc.h 00:03:21.506 TEST_HEADER include/spdk/likely.h 00:03:21.506 TEST_HEADER include/spdk/log.h 00:03:21.506 TEST_HEADER include/spdk/lvol.h 00:03:21.506 TEST_HEADER include/spdk/memory.h 00:03:21.506 TEST_HEADER include/spdk/mmio.h 00:03:21.506 TEST_HEADER include/spdk/nbd.h 00:03:21.506 TEST_HEADER include/spdk/notify.h 00:03:21.506 TEST_HEADER include/spdk/nvme.h 00:03:21.506 TEST_HEADER include/spdk/nvme_intel.h 00:03:21.506 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:21.506 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:21.506 TEST_HEADER include/spdk/nvme_spec.h 00:03:21.765 TEST_HEADER include/spdk/nvme_zns.h 00:03:21.765 TEST_HEADER include/spdk/nvmf.h 00:03:21.765 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:21.765 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:21.765 TEST_HEADER include/spdk/nvmf_spec.h 00:03:21.765 TEST_HEADER include/spdk/nvmf_transport.h 00:03:21.765 TEST_HEADER include/spdk/opal.h 00:03:21.765 TEST_HEADER include/spdk/opal_spec.h 00:03:21.765 TEST_HEADER include/spdk/pci_ids.h 00:03:21.765 TEST_HEADER include/spdk/pipe.h 00:03:21.765 TEST_HEADER include/spdk/queue.h 00:03:21.765 TEST_HEADER include/spdk/reduce.h 00:03:21.765 TEST_HEADER include/spdk/rpc.h 00:03:21.765 TEST_HEADER include/spdk/scheduler.h 00:03:21.765 TEST_HEADER include/spdk/scsi.h 00:03:21.765 TEST_HEADER include/spdk/scsi_spec.h 00:03:21.765 TEST_HEADER include/spdk/sock.h 00:03:21.765 TEST_HEADER include/spdk/stdinc.h 00:03:21.765 TEST_HEADER include/spdk/string.h 00:03:21.765 TEST_HEADER include/spdk/thread.h 00:03:21.765 TEST_HEADER include/spdk/trace.h 00:03:21.765 TEST_HEADER include/spdk/trace_parser.h 00:03:21.765 TEST_HEADER include/spdk/tree.h 00:03:21.765 TEST_HEADER include/spdk/ublk.h 00:03:21.765 TEST_HEADER include/spdk/util.h 00:03:21.765 TEST_HEADER include/spdk/uuid.h 
00:03:21.765 TEST_HEADER include/spdk/version.h 00:03:21.765 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:21.765 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:21.765 TEST_HEADER include/spdk/vhost.h 00:03:21.765 TEST_HEADER include/spdk/vmd.h 00:03:21.765 TEST_HEADER include/spdk/xor.h 00:03:21.765 TEST_HEADER include/spdk/zipf.h 00:03:21.765 CXX test/cpp_headers/accel.o 00:03:21.765 LINK nvme_fuzz 00:03:21.765 LINK mkfs 00:03:21.765 CXX test/cpp_headers/accel_module.o 00:03:22.024 LINK bdevio 00:03:22.024 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:22.024 CC examples/ioat/verify/verify.o 00:03:22.024 CXX test/cpp_headers/assert.o 00:03:22.024 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:22.282 CXX test/cpp_headers/barrier.o 00:03:22.282 LINK verify 00:03:22.282 CC examples/nvme/hello_world/hello_world.o 00:03:22.540 CXX test/cpp_headers/base64.o 00:03:22.540 CC examples/nvme/reconnect/reconnect.o 00:03:22.540 LINK hello_world 00:03:22.540 CXX test/cpp_headers/bdev.o 00:03:22.540 CXX test/cpp_headers/bdev_module.o 00:03:22.799 LINK vhost_fuzz 00:03:22.799 LINK spdk_nvme_perf 00:03:22.799 CXX test/cpp_headers/bdev_zone.o 00:03:22.799 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:23.057 CC examples/nvme/arbitration/arbitration.o 00:03:23.057 CXX test/cpp_headers/bit_array.o 00:03:23.057 LINK reconnect 00:03:23.316 CXX test/cpp_headers/bit_pool.o 00:03:23.316 CC examples/nvme/hotplug/hotplug.o 00:03:23.574 CC app/spdk_nvme_identify/identify.o 00:03:23.574 CC test/app/jsoncat/jsoncat.o 00:03:23.574 CC test/app/stub/stub.o 00:03:23.832 CC examples/sock/hello_world/hello_sock.o 00:03:23.832 CXX test/cpp_headers/blob.o 00:03:23.832 LINK jsoncat 00:03:23.832 LINK arbitration 00:03:23.832 LINK nvme_manage 00:03:23.832 LINK hotplug 00:03:23.832 LINK stub 00:03:24.090 CXX test/cpp_headers/blob_bdev.o 00:03:24.090 CC examples/vmd/lsvmd/lsvmd.o 00:03:24.090 LINK iscsi_fuzz 00:03:24.090 LINK hello_sock 00:03:24.348 LINK lsvmd 00:03:24.348 CXX test/cpp_headers/blobfs.o 00:03:24.348 CC test/dma/test_dma/test_dma.o 00:03:24.606 CC app/spdk_nvme_discover/discovery_aer.o 00:03:24.606 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:24.606 CC examples/nvme/abort/abort.o 00:03:24.606 CXX test/cpp_headers/blobfs_bdev.o 00:03:24.864 CXX test/cpp_headers/conf.o 00:03:24.864 CXX test/cpp_headers/config.o 00:03:24.864 LINK test_dma 00:03:24.864 LINK spdk_nvme_identify 00:03:24.864 CC examples/vmd/led/led.o 00:03:24.864 LINK spdk_nvme_discover 00:03:24.864 LINK cmb_copy 00:03:24.864 CXX test/cpp_headers/cpuset.o 00:03:24.864 CC test/env/mem_callbacks/mem_callbacks.o 00:03:25.123 LINK led 00:03:25.123 CC test/event/event_perf/event_perf.o 00:03:25.123 CC test/event/reactor/reactor.o 00:03:25.123 CXX test/cpp_headers/crc16.o 00:03:25.123 LINK reactor 00:03:25.123 LINK event_perf 00:03:25.123 LINK abort 00:03:25.381 CXX test/cpp_headers/crc32.o 00:03:25.381 CXX test/cpp_headers/crc64.o 00:03:25.638 CXX test/cpp_headers/dif.o 00:03:25.638 LINK mem_callbacks 00:03:25.638 CC app/spdk_top/spdk_top.o 00:03:25.638 CC test/event/reactor_perf/reactor_perf.o 00:03:25.638 CC test/event/app_repeat/app_repeat.o 00:03:25.896 CXX test/cpp_headers/dma.o 00:03:25.896 CC test/event/scheduler/scheduler.o 00:03:25.896 CC examples/util/zipf/zipf.o 00:03:25.896 LINK reactor_perf 00:03:25.896 LINK app_repeat 00:03:25.896 CC examples/nvmf/nvmf/nvmf.o 00:03:25.896 CC test/env/vtophys/vtophys.o 00:03:25.896 CXX test/cpp_headers/endian.o 00:03:26.154 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:26.154 LINK zipf 
00:03:26.154 LINK scheduler 00:03:26.154 LINK vtophys 00:03:26.154 CXX test/cpp_headers/env.o 00:03:26.154 LINK pmr_persistence 00:03:26.154 LINK nvmf 00:03:26.413 CXX test/cpp_headers/env_dpdk.o 00:03:26.413 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:26.671 CC examples/thread/thread/thread_ex.o 00:03:26.671 CXX test/cpp_headers/event.o 00:03:26.671 CC examples/idxd/perf/perf.o 00:03:26.671 LINK env_dpdk_post_init 00:03:26.671 CXX test/cpp_headers/fd.o 00:03:26.671 CXX test/cpp_headers/fd_group.o 00:03:26.930 LINK spdk_top 00:03:26.930 LINK thread 00:03:26.930 CXX test/cpp_headers/file.o 00:03:26.930 CC test/lvol/esnap/esnap.o 00:03:26.930 CC test/nvme/aer/aer.o 00:03:26.930 CC test/rpc_client/rpc_client_test.o 00:03:26.930 LINK idxd_perf 00:03:27.188 CXX test/cpp_headers/ftl.o 00:03:27.188 LINK rpc_client_test 00:03:27.188 CXX test/cpp_headers/gpt_spec.o 00:03:27.447 LINK aer 00:03:27.447 CC app/vhost/vhost.o 00:03:27.447 CC test/env/memory/memory_ut.o 00:03:27.447 CXX test/cpp_headers/hexlify.o 00:03:27.447 CXX test/cpp_headers/histogram_data.o 00:03:27.706 LINK vhost 00:03:27.706 CXX test/cpp_headers/idxd.o 00:03:27.706 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:27.706 CC test/thread/poller_perf/poller_perf.o 00:03:27.706 CC test/thread/lock/spdk_lock.o 00:03:27.964 CXX test/cpp_headers/idxd_spec.o 00:03:27.964 LINK poller_perf 00:03:27.964 LINK interrupt_tgt 00:03:28.223 CXX test/cpp_headers/init.o 00:03:28.223 CC test/nvme/reset/reset.o 00:03:28.223 CC app/spdk_dd/spdk_dd.o 00:03:28.223 CXX test/cpp_headers/ioat.o 00:03:28.482 LINK reset 00:03:28.482 CXX test/cpp_headers/ioat_spec.o 00:03:28.482 LINK memory_ut 00:03:28.741 CC app/fio/nvme/fio_plugin.o 00:03:28.741 CXX test/cpp_headers/iscsi_spec.o 00:03:28.741 LINK spdk_dd 00:03:28.741 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:03:28.741 CC test/env/pci/pci_ut.o 00:03:28.741 CXX test/cpp_headers/json.o 00:03:28.999 LINK histogram_ut 00:03:28.999 CC test/nvme/sgl/sgl.o 00:03:29.258 CC test/unit/lib/accel/accel.c/accel_ut.o 00:03:29.258 CC test/nvme/e2edp/nvme_dp.o 00:03:29.258 CXX test/cpp_headers/jsonrpc.o 00:03:29.258 LINK pci_ut 00:03:29.516 LINK sgl 00:03:29.516 LINK spdk_nvme 00:03:29.516 CC app/fio/bdev/fio_plugin.o 00:03:29.516 CXX test/cpp_headers/likely.o 00:03:29.775 LINK nvme_dp 00:03:29.775 CXX test/cpp_headers/log.o 00:03:30.032 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:03:30.032 LINK spdk_bdev 00:03:30.032 CXX test/cpp_headers/lvol.o 00:03:30.032 LINK spdk_lock 00:03:30.290 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:03:30.290 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:03:30.290 CXX test/cpp_headers/memory.o 00:03:30.549 CXX test/cpp_headers/mmio.o 00:03:30.549 CC test/nvme/overhead/overhead.o 00:03:30.549 LINK tree_ut 00:03:30.549 CC test/nvme/err_injection/err_injection.o 00:03:30.549 CXX test/cpp_headers/nbd.o 00:03:30.549 CXX test/cpp_headers/notify.o 00:03:30.807 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:03:30.807 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:03:30.807 LINK err_injection 00:03:30.807 CXX test/cpp_headers/nvme.o 00:03:30.807 LINK overhead 00:03:31.066 LINK blob_bdev_ut 00:03:31.066 CXX test/cpp_headers/nvme_intel.o 00:03:31.324 CXX test/cpp_headers/nvme_ocssd.o 00:03:31.324 CC test/unit/lib/blob/blob.c/blob_ut.o 00:03:31.324 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:31.582 CXX test/cpp_headers/nvme_spec.o 00:03:31.582 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:03:31.582 CC test/nvme/startup/startup.o 
00:03:31.846 CXX test/cpp_headers/nvme_zns.o 00:03:31.846 CC test/nvme/reserve/reserve.o 00:03:31.846 LINK startup 00:03:31.846 LINK blobfs_bdev_ut 00:03:31.846 CXX test/cpp_headers/nvmf.o 00:03:32.124 LINK reserve 00:03:32.124 CXX test/cpp_headers/nvmf_cmd.o 00:03:32.124 LINK accel_ut 00:03:32.124 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:32.390 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:32.390 LINK blobfs_sync_ut 00:03:32.390 LINK blobfs_async_ut 00:03:32.390 CXX test/cpp_headers/nvmf_spec.o 00:03:32.390 CXX test/cpp_headers/nvmf_transport.o 00:03:32.649 CXX test/cpp_headers/opal.o 00:03:32.649 CC test/nvme/simple_copy/simple_copy.o 00:03:32.649 CC test/nvme/connect_stress/connect_stress.o 00:03:32.907 CC test/unit/lib/dma/dma.c/dma_ut.o 00:03:32.907 CC test/unit/lib/event/app.c/app_ut.o 00:03:32.907 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:03:32.907 CXX test/cpp_headers/opal_spec.o 00:03:32.907 LINK connect_stress 00:03:32.907 LINK simple_copy 00:03:32.907 CXX test/cpp_headers/pci_ids.o 00:03:33.165 CXX test/cpp_headers/pipe.o 00:03:33.165 LINK dma_ut 00:03:33.423 CXX test/cpp_headers/queue.o 00:03:33.423 CXX test/cpp_headers/reduce.o 00:03:33.423 CC test/nvme/boot_partition/boot_partition.o 00:03:33.423 LINK esnap 00:03:33.423 LINK app_ut 00:03:33.681 CXX test/cpp_headers/rpc.o 00:03:33.681 LINK boot_partition 00:03:33.681 CC test/nvme/compliance/nvme_compliance.o 00:03:33.681 CXX test/cpp_headers/scheduler.o 00:03:33.681 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:03:33.681 CXX test/cpp_headers/scsi.o 00:03:33.940 LINK reactor_ut 00:03:33.940 CXX test/cpp_headers/scsi_spec.o 00:03:33.940 CXX test/cpp_headers/sock.o 00:03:33.940 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:03:34.199 CXX test/cpp_headers/stdinc.o 00:03:34.199 LINK nvme_compliance 00:03:34.199 CC test/nvme/fused_ordering/fused_ordering.o 00:03:34.199 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:34.199 CXX test/cpp_headers/string.o 00:03:34.199 LINK ioat_ut 00:03:34.456 CXX test/cpp_headers/thread.o 00:03:34.456 LINK fused_ordering 00:03:34.456 LINK doorbell_aers 00:03:34.456 CC test/nvme/fdp/fdp.o 00:03:34.456 CC test/nvme/cuse/cuse.o 00:03:34.456 CXX test/cpp_headers/trace.o 00:03:34.714 CXX test/cpp_headers/trace_parser.o 00:03:34.973 LINK fdp 00:03:34.973 CXX test/cpp_headers/tree.o 00:03:34.973 CXX test/cpp_headers/ublk.o 00:03:34.973 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:03:35.231 CXX test/cpp_headers/util.o 00:03:35.231 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:03:35.231 LINK scsi_nvme_ut 00:03:35.231 CXX test/cpp_headers/uuid.o 00:03:35.231 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:03:35.231 LINK conn_ut 00:03:35.489 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:03:35.489 CXX test/cpp_headers/version.o 00:03:35.489 CXX test/cpp_headers/vfio_user_pci.o 00:03:35.489 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:03:35.489 CXX test/cpp_headers/vfio_user_spec.o 00:03:35.748 LINK gpt_ut 00:03:35.748 LINK cuse 00:03:35.748 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:03:35.748 CXX test/cpp_headers/vhost.o 00:03:36.007 CXX test/cpp_headers/vmd.o 00:03:36.007 CXX test/cpp_headers/xor.o 00:03:36.007 LINK init_grp_ut 00:03:36.007 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:03:36.007 CXX test/cpp_headers/zipf.o 00:03:36.265 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:03:36.265 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:03:36.265 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:03:36.524 LINK part_ut 00:03:36.524 LINK bdev_ut 00:03:36.783 
LINK jsonrpc_server_ut 00:03:36.783 LINK vbdev_lvol_ut 00:03:36.783 LINK json_util_ut 00:03:36.783 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:03:36.783 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:03:37.041 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:03:37.041 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:03:37.041 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:03:37.300 LINK json_write_ut 00:03:37.300 LINK bdev_zone_ut 00:03:37.559 LINK bdev_raid_sb_ut 00:03:37.559 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:03:37.559 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:03:37.559 LINK raid1_ut 00:03:37.559 LINK concat_ut 00:03:37.818 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:03:37.818 CC test/unit/lib/log/log.c/log_ut.o 00:03:37.818 CC test/unit/lib/iscsi/param.c/param_ut.o 00:03:38.386 LINK log_ut 00:03:38.386 LINK json_parse_ut 00:03:38.386 LINK vbdev_zone_block_ut 00:03:38.386 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:03:38.645 LINK param_ut 00:03:38.645 CC test/unit/lib/notify/notify.c/notify_ut.o 00:03:38.645 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:03:38.645 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:03:38.645 LINK raid5f_ut 00:03:38.904 LINK notify_ut 00:03:38.904 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:03:39.163 LINK iscsi_ut 00:03:39.163 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:03:39.163 LINK bdev_raid_ut 00:03:39.422 LINK portal_grp_ut 00:03:39.422 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:03:39.422 CC test/unit/lib/sock/sock.c/sock_ut.o 00:03:39.680 CC test/unit/lib/thread/thread.c/thread_ut.o 00:03:39.939 LINK tgt_node_ut 00:03:39.939 LINK dev_ut 00:03:39.939 LINK blob_ut 00:03:40.198 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:03:40.198 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:03:40.198 LINK bdev_ut 00:03:40.456 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:03:40.456 LINK nvme_ut 00:03:40.715 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:03:40.715 LINK lvol_ut 00:03:40.715 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:03:40.974 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:03:40.974 LINK iobuf_ut 00:03:40.974 LINK lun_ut 00:03:41.235 LINK sock_ut 00:03:41.235 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:03:41.235 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:03:41.493 CC test/unit/lib/sock/posix.c/posix_ut.o 00:03:41.493 LINK scsi_ut 00:03:41.751 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:03:42.316 LINK nvme_ctrlr_ocssd_cmd_ut 00:03:42.316 LINK nvme_ns_ut 00:03:42.574 LINK thread_ut 00:03:42.574 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:03:42.574 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:03:42.862 CC test/unit/lib/util/base64.c/base64_ut.o 00:03:42.862 LINK nvme_ctrlr_cmd_ut 00:03:42.862 LINK posix_ut 00:03:43.120 LINK scsi_bdev_ut 00:03:43.120 LINK base64_ut 00:03:43.120 LINK scsi_pr_ut 00:03:43.120 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:03:43.378 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:03:43.378 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:03:43.378 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:03:43.378 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:03:43.378 LINK bdev_nvme_ut 00:03:43.637 LINK pci_event_ut 00:03:43.637 LINK cpuset_ut 00:03:43.896 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:03:43.896 LINK bit_array_ut 00:03:43.896 LINK tcp_ut 00:03:43.896 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:03:43.896 LINK subsystem_ut 
00:03:43.896 LINK ctrlr_ut 00:03:43.896 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:03:44.154 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:03:44.154 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:03:44.154 LINK crc16_ut 00:03:44.154 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:03:44.413 LINK rpc_ut 00:03:44.413 CC test/unit/lib/rdma/common.c/common_ut.o 00:03:44.413 LINK crc32_ieee_ut 00:03:44.413 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:03:44.413 LINK idxd_user_ut 00:03:44.671 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:03:44.671 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:03:44.671 LINK crc32c_ut 00:03:44.671 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:03:44.671 LINK nvme_ctrlr_ut 00:03:44.671 LINK crc64_ut 00:03:44.929 CC test/unit/lib/util/dif.c/dif_ut.o 00:03:44.929 LINK common_ut 00:03:44.929 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:03:44.929 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:03:44.929 LINK ftl_l2p_ut 00:03:45.187 LINK nvme_ns_cmd_ut 00:03:45.188 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:03:45.188 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:03:45.188 LINK nvme_ns_ocssd_cmd_ut 00:03:45.188 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:03:45.446 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:03:45.446 LINK ftl_bitmap_ut 00:03:45.704 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:03:45.704 LINK idxd_ut 00:03:45.704 LINK ftl_mempool_ut 00:03:45.704 LINK ftl_io_ut 00:03:45.962 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:03:45.962 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:03:45.962 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:03:46.221 LINK dif_ut 00:03:46.221 LINK ftl_band_ut 00:03:46.479 LINK vhost_ut 00:03:46.479 CC test/unit/lib/util/iov.c/iov_ut.o 00:03:46.479 LINK nvme_poll_group_ut 00:03:46.738 CC test/unit/lib/util/math.c/math_ut.o 00:03:46.738 LINK ftl_mngt_ut 00:03:46.738 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:03:46.738 LINK iov_ut 00:03:46.738 CC test/unit/lib/util/string.c/string_ut.o 00:03:46.738 LINK math_ut 00:03:46.997 CC test/unit/lib/util/xor.c/xor_ut.o 00:03:46.997 LINK subsystem_ut 00:03:46.997 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:03:46.997 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:03:46.997 LINK nvme_pcie_ut 00:03:46.997 LINK nvme_qpair_ut 00:03:47.255 LINK string_ut 00:03:47.255 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:03:47.255 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:03:47.255 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:03:47.255 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:03:47.514 LINK pipe_ut 00:03:47.514 LINK xor_ut 00:03:47.773 LINK ftl_sb_ut 00:03:47.773 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:03:47.773 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:03:47.773 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:03:48.031 LINK ctrlr_discovery_ut 00:03:48.031 LINK nvme_quirks_ut 00:03:48.031 LINK ctrlr_bdev_ut 00:03:48.289 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:03:48.289 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:03:48.289 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:03:48.289 LINK nvmf_ut 00:03:48.548 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:03:48.807 LINK nvme_io_msg_ut 00:03:48.807 LINK nvme_transport_ut 00:03:48.807 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:03:49.374 LINK nvme_opal_ut 00:03:49.374 LINK nvme_fabric_ut 00:03:49.374 LINK ftl_layout_upgrade_ut 00:03:49.942 LINK 
nvme_pcie_common_ut 00:03:50.529 LINK nvme_tcp_ut 00:03:50.787 LINK nvme_cuse_ut 00:03:50.787 LINK transport_ut 00:03:51.356 LINK nvme_rdma_ut 00:03:51.614 LINK rdma_ut 00:03:51.873 00:03:51.873 real 2m0.803s 00:03:51.873 user 9m58.283s 00:03:51.873 sys 2m9.268s 00:03:51.873 05:01:10 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:51.873 05:01:10 -- common/autotest_common.sh@10 -- $ set +x 00:03:51.873 ************************************ 00:03:51.873 END TEST unittest_build 00:03:51.873 ************************************ 00:03:51.873 05:01:10 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:51.873 05:01:10 -- nvmf/common.sh@7 -- # uname -s 00:03:51.873 05:01:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:51.873 05:01:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:51.873 05:01:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:51.873 05:01:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:51.873 05:01:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:51.873 05:01:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:51.873 05:01:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:51.873 05:01:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:51.873 05:01:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:51.873 05:01:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:51.873 05:01:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:daa6cca6-b131-412d-ad3d-d3aef57713f9 00:03:51.873 05:01:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=daa6cca6-b131-412d-ad3d-d3aef57713f9 00:03:51.873 05:01:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:51.873 05:01:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:51.873 05:01:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:51.873 05:01:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:51.873 05:01:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:51.873 05:01:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:51.873 05:01:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:51.873 05:01:10 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:51.873 05:01:10 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:51.873 05:01:10 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:51.873 05:01:10 -- paths/export.sh@5 -- # 
PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:51.873 05:01:10 -- paths/export.sh@6 -- # export PATH 00:03:51.873 05:01:10 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:51.873 05:01:10 -- nvmf/common.sh@46 -- # : 0 00:03:51.874 05:01:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:51.874 05:01:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:51.874 05:01:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:51.874 05:01:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:51.874 05:01:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:51.874 05:01:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:51.874 05:01:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:51.874 05:01:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:51.874 05:01:10 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:51.874 05:01:10 -- spdk/autotest.sh@32 -- # uname -s 00:03:51.874 05:01:10 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:51.874 05:01:10 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:03:51.874 05:01:10 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:51.874 05:01:10 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:51.874 05:01:10 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:51.874 05:01:10 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:52.132 05:01:11 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:52.133 05:01:11 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:03:52.133 05:01:11 -- spdk/autotest.sh@48 -- # udevadm_pid=51384 00:03:52.133 05:01:11 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:52.133 05:01:11 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:03:52.133 05:01:11 -- spdk/autotest.sh@54 -- # echo 51392 00:03:52.133 05:01:11 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:52.133 05:01:11 -- spdk/autotest.sh@56 -- # echo 51393 00:03:52.133 05:01:11 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:52.133 05:01:11 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:52.133 05:01:11 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:52.133 05:01:11 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:52.133 05:01:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:52.133 05:01:11 -- common/autotest_common.sh@10 -- # set +x 00:03:52.133 05:01:11 -- spdk/autotest.sh@70 -- # create_test_list 00:03:52.133 05:01:11 -- common/autotest_common.sh@736 -- # xtrace_disable 00:03:52.133 05:01:11 -- common/autotest_common.sh@10 -- # set +x 
00:03:52.133 05:01:11 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:52.133 05:01:11 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:52.133 05:01:11 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:52.133 05:01:11 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:52.133 05:01:11 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:52.133 05:01:11 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:52.133 05:01:11 -- common/autotest_common.sh@1440 -- # uname 00:03:52.133 05:01:11 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:03:52.133 05:01:11 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:52.133 05:01:11 -- common/autotest_common.sh@1460 -- # uname 00:03:52.133 05:01:11 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:03:52.133 05:01:11 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:03:52.133 05:01:11 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:03:52.133 05:01:11 -- spdk/autotest.sh@83 -- # hash lcov 00:03:52.133 05:01:11 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:52.133 05:01:11 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:03:52.133 --rc lcov_branch_coverage=1 00:03:52.133 --rc lcov_function_coverage=1 00:03:52.133 --rc genhtml_branch_coverage=1 00:03:52.133 --rc genhtml_function_coverage=1 00:03:52.133 --rc genhtml_legend=1 00:03:52.133 --rc geninfo_all_blocks=1 00:03:52.133 ' 00:03:52.133 05:01:11 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:03:52.133 --rc lcov_branch_coverage=1 00:03:52.133 --rc lcov_function_coverage=1 00:03:52.133 --rc genhtml_branch_coverage=1 00:03:52.133 --rc genhtml_function_coverage=1 00:03:52.133 --rc genhtml_legend=1 00:03:52.133 --rc geninfo_all_blocks=1 00:03:52.133 ' 00:03:52.133 05:01:11 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:03:52.133 --rc lcov_branch_coverage=1 00:03:52.133 --rc lcov_function_coverage=1 00:03:52.133 --rc genhtml_branch_coverage=1 00:03:52.133 --rc genhtml_function_coverage=1 00:03:52.133 --rc genhtml_legend=1 00:03:52.133 --rc geninfo_all_blocks=1 00:03:52.133 --no-external' 00:03:52.133 05:01:11 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:03:52.133 --rc lcov_branch_coverage=1 00:03:52.133 --rc lcov_function_coverage=1 00:03:52.133 --rc genhtml_branch_coverage=1 00:03:52.133 --rc genhtml_function_coverage=1 00:03:52.133 --rc genhtml_legend=1 00:03:52.133 --rc geninfo_all_blocks=1 00:03:52.133 --no-external' 00:03:52.133 05:01:11 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:52.133 lcov: LCOV version 1.15 00:03:52.133 05:01:11 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:04.333 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:04.333 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:04.333 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:04.333 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:04.333 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:04.333 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:36.472 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:36.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:36.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:36.473 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:36.473 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:36.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:36.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:44.584 05:02:03 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:04:44.584 05:02:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:44.584 05:02:03 -- common/autotest_common.sh@10 -- # set +x 00:04:44.584 05:02:03 -- spdk/autotest.sh@102 -- # rm -f 00:04:44.584 05:02:03 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:44.842 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:04:44.842 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:44.842 05:02:03 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:04:44.842 05:02:03 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:44.842 05:02:03 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:44.842 05:02:03 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:44.842 05:02:03 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:44.842 05:02:03 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:44.842 05:02:03 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:44.842 05:02:03 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:44.842 05:02:03 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:44.842 05:02:03 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:04:44.842 05:02:03 -- spdk/autotest.sh@121 -- # grep -v p 00:04:44.842 05:02:03 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:04:44.842 05:02:03 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:44.842 05:02:03 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:44.842 05:02:03 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:04:44.842 05:02:03 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:44.842 05:02:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:45.100 No valid GPT data, bailing 00:04:45.100 05:02:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:45.100 05:02:03 -- scripts/common.sh@393 -- # pt= 00:04:45.100 05:02:03 -- scripts/common.sh@394 -- # return 1 00:04:45.100 05:02:03 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:45.100 1+0 records in 00:04:45.100 1+0 records out 00:04:45.100 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00433291 s, 242 MB/s 00:04:45.100 05:02:03 -- spdk/autotest.sh@129 -- # sync 00:04:45.100 05:02:04 -- spdk/autotest.sh@131 -- # 
xtrace_disable_per_cmd reap_spdk_processes 00:04:45.100 05:02:04 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:45.100 05:02:04 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:46.473 05:02:05 -- spdk/autotest.sh@135 -- # uname -s 00:04:46.473 05:02:05 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:04:46.473 05:02:05 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:46.473 05:02:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:46.473 05:02:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:46.473 05:02:05 -- common/autotest_common.sh@10 -- # set +x 00:04:46.473 ************************************ 00:04:46.473 START TEST setup.sh 00:04:46.474 ************************************ 00:04:46.474 05:02:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:46.474 * Looking for test storage... 00:04:46.474 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:46.474 05:02:05 -- setup/test-setup.sh@10 -- # uname -s 00:04:46.474 05:02:05 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:46.474 05:02:05 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:46.474 05:02:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:46.474 05:02:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:46.474 05:02:05 -- common/autotest_common.sh@10 -- # set +x 00:04:46.474 ************************************ 00:04:46.474 START TEST acl 00:04:46.474 ************************************ 00:04:46.474 05:02:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:46.731 * Looking for test storage... 
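The pre_cleanup trace above walks /sys/block/nvme* looking for zoned namespaces before the GPT check bails and the first 1 MiB of /dev/nvme0n1 is zeroed. A simplified sketch of that check, assuming get_zoned_devs is reduced to a plain list (the real helper declares an associative array and a bdf variable as well):

    # list NVMe block devices whose queue reports a zoned model other than "none"
    zoned_devs=()
    for nvme in /sys/block/nvme*; do
        if [[ -e $nvme/queue/zoned && $(< "$nvme/queue/zoned") != none ]]; then
            zoned_devs+=("${nvme##*/}")
        fi
    done

Here nvme0n1 reports "none", so the list stays empty and the later (( 0 > 0 )) guard skips the zoned-device handling.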
00:04:46.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:46.731 05:02:05 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:46.731 05:02:05 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:46.731 05:02:05 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:46.731 05:02:05 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:46.731 05:02:05 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:46.731 05:02:05 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:46.731 05:02:05 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:46.731 05:02:05 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:46.731 05:02:05 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:46.731 05:02:05 -- setup/acl.sh@12 -- # devs=() 00:04:46.731 05:02:05 -- setup/acl.sh@12 -- # declare -a devs 00:04:46.731 05:02:05 -- setup/acl.sh@13 -- # drivers=() 00:04:46.731 05:02:05 -- setup/acl.sh@13 -- # declare -A drivers 00:04:46.731 05:02:05 -- setup/acl.sh@51 -- # setup reset 00:04:46.731 05:02:05 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:46.731 05:02:05 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:46.989 05:02:06 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:46.989 05:02:06 -- setup/acl.sh@16 -- # local dev driver 00:04:46.989 05:02:06 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:46.989 05:02:06 -- setup/acl.sh@15 -- # setup output status 00:04:46.989 05:02:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.989 05:02:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:47.254 Hugepages 00:04:47.254 node hugesize free / total 00:04:47.254 05:02:06 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:47.254 05:02:06 -- setup/acl.sh@19 -- # continue 00:04:47.254 05:02:06 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:47.254 00:04:47.254 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:47.254 05:02:06 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:47.254 05:02:06 -- setup/acl.sh@19 -- # continue 00:04:47.254 05:02:06 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:47.254 05:02:06 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:47.254 05:02:06 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:47.254 05:02:06 -- setup/acl.sh@20 -- # continue 00:04:47.254 05:02:06 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:47.531 05:02:06 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:47.531 05:02:06 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:47.531 05:02:06 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:47.531 05:02:06 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:47.531 05:02:06 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:47.531 05:02:06 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:47.531 05:02:06 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:47.531 05:02:06 -- setup/acl.sh@54 -- # run_test denied denied 00:04:47.531 05:02:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:47.531 05:02:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:47.531 05:02:06 -- common/autotest_common.sh@10 -- # set +x 00:04:47.531 ************************************ 00:04:47.531 START TEST denied 00:04:47.531 ************************************ 00:04:47.531 05:02:06 -- common/autotest_common.sh@1104 -- # denied 00:04:47.531 05:02:06 -- 
setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:47.531 05:02:06 -- setup/acl.sh@38 -- # setup output config 00:04:47.531 05:02:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.531 05:02:06 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:47.531 05:02:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:48.463 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:48.463 05:02:07 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:48.463 05:02:07 -- setup/acl.sh@28 -- # local dev driver 00:04:48.463 05:02:07 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:48.463 05:02:07 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:48.463 05:02:07 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:48.463 05:02:07 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:48.463 05:02:07 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:48.463 05:02:07 -- setup/acl.sh@41 -- # setup reset 00:04:48.463 05:02:07 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:48.463 05:02:07 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:48.721 00:04:48.721 real 0m1.395s 00:04:48.721 user 0m0.382s 00:04:48.721 sys 0m1.077s 00:04:48.721 05:02:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.721 05:02:07 -- common/autotest_common.sh@10 -- # set +x 00:04:48.721 ************************************ 00:04:48.721 END TEST denied 00:04:48.721 ************************************ 00:04:48.980 05:02:07 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:48.980 05:02:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:48.980 05:02:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:48.980 05:02:07 -- common/autotest_common.sh@10 -- # set +x 00:04:48.980 ************************************ 00:04:48.980 START TEST allowed 00:04:48.980 ************************************ 00:04:48.980 05:02:07 -- common/autotest_common.sh@1104 -- # allowed 00:04:48.980 05:02:07 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:48.980 05:02:07 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:48.980 05:02:07 -- setup/acl.sh@45 -- # setup output config 00:04:48.980 05:02:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.980 05:02:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:49.914 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:49.914 05:02:08 -- setup/acl.sh@47 -- # verify 00:04:49.914 05:02:08 -- setup/acl.sh@28 -- # local dev driver 00:04:49.914 05:02:08 -- setup/acl.sh@48 -- # setup reset 00:04:49.914 05:02:08 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:49.914 05:02:08 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:50.482 00:04:50.482 real 0m1.470s 00:04:50.482 user 0m0.331s 00:04:50.482 sys 0m1.186s 00:04:50.482 05:02:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.482 ************************************ 00:04:50.482 END TEST allowed 00:04:50.482 05:02:09 -- common/autotest_common.sh@10 -- # set +x 00:04:50.482 ************************************ 00:04:50.482 ************************************ 00:04:50.482 END TEST acl 00:04:50.482 ************************************ 00:04:50.482 00:04:50.482 real 0m3.820s 00:04:50.482 user 0m1.067s 00:04:50.482 sys 0m2.921s 00:04:50.482 05:02:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.482 
05:02:09 -- common/autotest_common.sh@10 -- # set +x 00:04:50.482 05:02:09 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:50.482 05:02:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:50.482 05:02:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:50.482 05:02:09 -- common/autotest_common.sh@10 -- # set +x 00:04:50.482 ************************************ 00:04:50.482 START TEST hugepages 00:04:50.482 ************************************ 00:04:50.482 05:02:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:50.482 * Looking for test storage... 00:04:50.482 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:50.482 05:02:09 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:50.482 05:02:09 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:50.482 05:02:09 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:50.482 05:02:09 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:50.482 05:02:09 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:50.482 05:02:09 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:50.482 05:02:09 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:50.482 05:02:09 -- setup/common.sh@18 -- # local node= 00:04:50.482 05:02:09 -- setup/common.sh@19 -- # local var val 00:04:50.482 05:02:09 -- setup/common.sh@20 -- # local mem_f mem 00:04:50.482 05:02:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.482 05:02:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.482 05:02:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.482 05:02:09 -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.482 05:02:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.482 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.482 05:02:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 2947068 kB' 'MemAvailable: 7324884 kB' 'Buffers: 35112 kB' 'Cached: 4494780 kB' 'SwapCached: 0 kB' 'Active: 396260 kB' 'Inactive: 4231144 kB' 'Active(anon): 108860 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231144 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 126364 kB' 'Mapped: 58484 kB' 'Shmem: 2600 kB' 'KReclaimable: 181024 kB' 'Slab: 260672 kB' 'SReclaimable: 181024 kB' 'SUnreclaim: 79648 kB' 'KernelStack: 5036 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4026004 kB' 'Committed_AS: 366928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20104 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:04:50.482 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.482 05:02:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.482 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.482 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.482 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.482 05:02:09 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.482 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.482 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.483 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.484 05:02:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.484 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.484 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.484 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.484 05:02:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.484 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.484 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.484 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.484 05:02:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.484 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.484 05:02:09 
-- setup/common.sh@31 -- # IFS=': ' 00:04:50.484 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.484 05:02:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.484 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.484 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.484 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.484 05:02:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.484 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.484 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.484 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.484 05:02:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.484 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.484 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.484 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.484 05:02:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.484 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.484 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.484 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.484 05:02:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.484 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.484 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.484 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.484 05:02:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.484 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.484 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.484 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.484 05:02:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.484 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.484 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.484 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.484 05:02:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.484 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.484 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.484 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.484 05:02:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.484 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.484 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.484 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.484 05:02:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.484 05:02:09 -- setup/common.sh@32 -- # continue 00:04:50.484 05:02:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.484 05:02:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.484 05:02:09 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.484 05:02:09 -- setup/common.sh@33 -- # echo 2048 00:04:50.484 05:02:09 -- setup/common.sh@33 -- # return 0 00:04:50.484 05:02:09 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:50.484 05:02:09 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:50.484 05:02:09 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:50.484 05:02:09 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:50.484 05:02:09 -- setup/hugepages.sh@22 -- # unset -v 
HUGEMEM 00:04:50.484 05:02:09 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:50.484 05:02:09 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:50.484 05:02:09 -- setup/hugepages.sh@207 -- # get_nodes 00:04:50.484 05:02:09 -- setup/hugepages.sh@27 -- # local node 00:04:50.484 05:02:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:50.484 05:02:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:50.484 05:02:09 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:50.484 05:02:09 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:50.484 05:02:09 -- setup/hugepages.sh@208 -- # clear_hp 00:04:50.484 05:02:09 -- setup/hugepages.sh@37 -- # local node hp 00:04:50.484 05:02:09 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:50.484 05:02:09 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:50.484 05:02:09 -- setup/hugepages.sh@41 -- # echo 0 00:04:50.484 05:02:09 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:50.484 05:02:09 -- setup/hugepages.sh@41 -- # echo 0 00:04:50.484 05:02:09 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:50.484 05:02:09 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:50.484 05:02:09 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:50.484 05:02:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:50.484 05:02:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:50.484 05:02:09 -- common/autotest_common.sh@10 -- # set +x 00:04:50.484 ************************************ 00:04:50.484 START TEST default_setup 00:04:50.484 ************************************ 00:04:50.484 05:02:09 -- common/autotest_common.sh@1104 -- # default_setup 00:04:50.484 05:02:09 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:50.484 05:02:09 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:50.484 05:02:09 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:50.484 05:02:09 -- setup/hugepages.sh@51 -- # shift 00:04:50.484 05:02:09 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:50.484 05:02:09 -- setup/hugepages.sh@52 -- # local node_ids 00:04:50.484 05:02:09 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:50.484 05:02:09 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:50.484 05:02:09 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:50.484 05:02:09 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:50.484 05:02:09 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:50.484 05:02:09 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:50.484 05:02:09 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:50.484 05:02:09 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:50.484 05:02:09 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:50.484 05:02:09 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:50.484 05:02:09 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:50.484 05:02:09 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:50.484 05:02:09 -- setup/hugepages.sh@73 -- # return 0 00:04:50.484 05:02:09 -- setup/hugepages.sh@137 -- # setup output 00:04:50.484 05:02:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.484 05:02:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:51.051 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 
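Most of the trace above (and the AnonHugePages block that follows) is setup/common.sh's get_meminfo scanning /proc/meminfo field by field with IFS=': '. A condensed sketch of that pattern, assuming the per-node handling is dropped (the real function also strips "Node N" prefixes when a node is requested):

    # return the value column for one /proc/meminfo key, e.g. "Hugepagesize" -> 2048 (kB)
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

With Hugepagesize reported as 2048 kB, default_setup's request for 2097152 kB works out to 2097152 / 2048 = 1024 pages, which matches the nr_hugepages=1024 assignment in the trace.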
00:04:51.051 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:51.313 05:02:10 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:51.313 05:02:10 -- setup/hugepages.sh@89 -- # local node 00:04:51.313 05:02:10 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:51.313 05:02:10 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:51.313 05:02:10 -- setup/hugepages.sh@92 -- # local surp 00:04:51.313 05:02:10 -- setup/hugepages.sh@93 -- # local resv 00:04:51.313 05:02:10 -- setup/hugepages.sh@94 -- # local anon 00:04:51.313 05:02:10 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:51.313 05:02:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:51.313 05:02:10 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:51.313 05:02:10 -- setup/common.sh@18 -- # local node= 00:04:51.313 05:02:10 -- setup/common.sh@19 -- # local var val 00:04:51.313 05:02:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.313 05:02:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.313 05:02:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.313 05:02:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.313 05:02:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.313 05:02:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.313 05:02:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 5005276 kB' 'MemAvailable: 9383100 kB' 'Buffers: 35112 kB' 'Cached: 4494780 kB' 'SwapCached: 0 kB' 'Active: 412168 kB' 'Inactive: 4231152 kB' 'Active(anon): 124768 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231152 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 142340 kB' 'Mapped: 58408 kB' 'Shmem: 2592 kB' 'KReclaimable: 181024 kB' 'Slab: 260664 kB' 'SReclaimable: 181024 kB' 'SUnreclaim: 79640 kB' 'KernelStack: 5056 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074580 kB' 'Committed_AS: 381972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20104 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.313 05:02:10 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:51.313 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.313 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.314 05:02:10 -- 
setup/common.sh@32 -- # continue 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.314 05:02:10 -- setup/common.sh@33 -- # echo 0 00:04:51.314 05:02:10 -- setup/common.sh@33 -- # return 0 00:04:51.314 05:02:10 -- setup/hugepages.sh@97 -- # anon=0 00:04:51.314 05:02:10 -- 
setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:51.314 05:02:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.314 05:02:10 -- setup/common.sh@18 -- # local node= 00:04:51.314 05:02:10 -- setup/common.sh@19 -- # local var val 00:04:51.314 05:02:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.314 05:02:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.314 05:02:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.314 05:02:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.314 05:02:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.314 05:02:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.314 05:02:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 5005528 kB' 'MemAvailable: 9383352 kB' 'Buffers: 35112 kB' 'Cached: 4494780 kB' 'SwapCached: 0 kB' 'Active: 411788 kB' 'Inactive: 4231152 kB' 'Active(anon): 124388 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231152 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 141936 kB' 'Mapped: 58404 kB' 'Shmem: 2592 kB' 'KReclaimable: 181024 kB' 'Slab: 260664 kB' 'SReclaimable: 181024 kB' 'SUnreclaim: 79640 kB' 'KernelStack: 5040 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074580 kB' 'Committed_AS: 381972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20088 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.314 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.314 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # 
continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.315 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.315 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.316 05:02:10 -- setup/common.sh@33 -- # echo 0 00:04:51.316 05:02:10 -- setup/common.sh@33 -- # return 0 00:04:51.316 05:02:10 -- setup/hugepages.sh@99 -- # surp=0 00:04:51.316 05:02:10 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:51.316 05:02:10 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:51.316 05:02:10 -- setup/common.sh@18 -- # local node= 00:04:51.316 05:02:10 -- setup/common.sh@19 -- # local var val 00:04:51.316 05:02:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.316 05:02:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.316 05:02:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.316 05:02:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.316 05:02:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.316 05:02:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.316 05:02:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 5005788 kB' 'MemAvailable: 9383612 kB' 'Buffers: 35112 kB' 'Cached: 4494780 kB' 'SwapCached: 0 kB' 'Active: 411788 kB' 'Inactive: 4231152 kB' 'Active(anon): 124388 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231152 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 142196 kB' 'Mapped: 58404 kB' 'Shmem: 2592 kB' 'KReclaimable: 181024 kB' 'Slab: 260664 kB' 'SReclaimable: 181024 kB' 'SUnreclaim: 79640 kB' 'KernelStack: 5040 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 
kB' 'CommitLimit: 5074580 kB' 'Committed_AS: 381972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20072 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.316 
05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # 
continue 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.316 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.316 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.317 05:02:10 -- setup/common.sh@32 
-- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.317 05:02:10 -- setup/common.sh@33 -- # echo 0 00:04:51.317 05:02:10 -- 
setup/common.sh@33 -- # return 0 00:04:51.317 05:02:10 -- setup/hugepages.sh@100 -- # resv=0 00:04:51.317 05:02:10 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:51.317 nr_hugepages=1024 00:04:51.317 resv_hugepages=0 00:04:51.317 05:02:10 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:51.317 surplus_hugepages=0 00:04:51.317 05:02:10 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:51.317 anon_hugepages=0 00:04:51.317 05:02:10 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:51.317 05:02:10 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:51.317 05:02:10 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:51.317 05:02:10 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:51.317 05:02:10 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:51.317 05:02:10 -- setup/common.sh@18 -- # local node= 00:04:51.317 05:02:10 -- setup/common.sh@19 -- # local var val 00:04:51.317 05:02:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.317 05:02:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.317 05:02:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.317 05:02:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.317 05:02:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.317 05:02:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.317 05:02:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 5007640 kB' 'MemAvailable: 9385464 kB' 'Buffers: 35112 kB' 'Cached: 4494780 kB' 'SwapCached: 0 kB' 'Active: 411708 kB' 'Inactive: 4231152 kB' 'Active(anon): 124308 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231152 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 141860 kB' 'Mapped: 58404 kB' 'Shmem: 2592 kB' 'KReclaimable: 181024 kB' 'Slab: 260660 kB' 'SReclaimable: 181024 kB' 'SUnreclaim: 79636 kB' 'KernelStack: 5008 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074580 kB' 'Committed_AS: 381972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20072 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.317 
05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.317 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.317 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # 
[[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.318 
05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.318 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.318 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.319 05:02:10 -- setup/common.sh@33 -- # echo 1024 00:04:51.319 05:02:10 -- setup/common.sh@33 -- # return 0 00:04:51.319 05:02:10 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:51.319 05:02:10 -- setup/hugepages.sh@112 -- # get_nodes 00:04:51.319 05:02:10 -- setup/hugepages.sh@27 -- # local node 00:04:51.319 05:02:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:51.319 05:02:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:51.319 05:02:10 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:51.319 05:02:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:51.319 05:02:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:51.319 05:02:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:51.319 05:02:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:51.319 05:02:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.319 05:02:10 -- setup/common.sh@18 -- # local node=0 00:04:51.319 05:02:10 -- setup/common.sh@19 -- # local var val 00:04:51.319 05:02:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.319 05:02:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.319 05:02:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:51.319 05:02:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:51.319 05:02:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.319 05:02:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.319 05:02:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 5007640 kB' 'MemUsed: 7238676 kB' 'SwapCached: 0 kB' 'Active: 411968 
kB' 'Inactive: 4231152 kB' 'Active(anon): 124568 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231152 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 4529892 kB' 'Mapped: 58404 kB' 'AnonPages: 142120 kB' 'Shmem: 2592 kB' 'KernelStack: 5008 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 181024 kB' 'Slab: 260660 kB' 'SReclaimable: 181024 kB' 'SUnreclaim: 79636 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.319 05:02:10 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.319 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.319 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.320 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.320 05:02:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.320 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.320 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.320 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.320 05:02:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.320 05:02:10 -- setup/common.sh@32 -- # continue 
00:04:51.320 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.320 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.320 05:02:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.320 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.320 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.320 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.320 05:02:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.320 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.320 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.320 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.320 05:02:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.320 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.320 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.320 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.320 05:02:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.320 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.320 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.320 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.320 05:02:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.320 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.320 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.320 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.320 05:02:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.320 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.320 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.320 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.320 05:02:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.320 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.320 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.320 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.320 05:02:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.320 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.320 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.320 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.320 05:02:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.320 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.320 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.320 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.320 05:02:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.320 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.320 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.320 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.320 05:02:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.320 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.320 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.320 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.320 05:02:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.320 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.320 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.320 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.320 05:02:10 -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.320 05:02:10 -- setup/common.sh@32 -- # continue 00:04:51.320 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.320 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.320 05:02:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.320 05:02:10 -- setup/common.sh@33 -- # echo 0 00:04:51.320 05:02:10 -- setup/common.sh@33 -- # return 0 00:04:51.320 05:02:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:51.320 05:02:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:51.320 05:02:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:51.320 05:02:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:51.320 node0=1024 expecting 1024 00:04:51.320 05:02:10 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:51.320 05:02:10 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:51.320 00:04:51.320 real 0m0.876s 00:04:51.320 user 0m0.268s 00:04:51.320 sys 0m0.609s 00:04:51.320 05:02:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.320 05:02:10 -- common/autotest_common.sh@10 -- # set +x 00:04:51.320 ************************************ 00:04:51.320 END TEST default_setup 00:04:51.320 ************************************ 00:04:51.579 05:02:10 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:51.579 05:02:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:51.579 05:02:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:51.579 05:02:10 -- common/autotest_common.sh@10 -- # set +x 00:04:51.579 ************************************ 00:04:51.579 START TEST per_node_1G_alloc 00:04:51.579 ************************************ 00:04:51.579 05:02:10 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:04:51.579 05:02:10 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:51.579 05:02:10 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:51.579 05:02:10 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:51.579 05:02:10 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:51.579 05:02:10 -- setup/hugepages.sh@51 -- # shift 00:04:51.579 05:02:10 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:51.579 05:02:10 -- setup/hugepages.sh@52 -- # local node_ids 00:04:51.579 05:02:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:51.579 05:02:10 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:51.579 05:02:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:51.579 05:02:10 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:51.579 05:02:10 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:51.579 05:02:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:51.579 05:02:10 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:51.579 05:02:10 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:51.579 05:02:10 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:51.579 05:02:10 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:51.579 05:02:10 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:51.579 05:02:10 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:51.579 05:02:10 -- setup/hugepages.sh@73 -- # return 0 00:04:51.579 05:02:10 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:51.579 05:02:10 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:51.579 05:02:10 -- setup/hugepages.sh@146 -- # setup output 00:04:51.579 
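The default_setup pass above ends with get_meminfo reporting HugePages_Total=1024 and then HugePages_Surp=0 for node0; every one of the traced read/continue lines is the same field scan over /proc/meminfo (or, for a per-node query, that node's own meminfo file). A minimal, self-contained sketch of that scan, reconstructed from the traced commands rather than copied from setup/common.sh, so the function shape and the sed-based node-prefix strip are illustrative only:

    get_meminfo() {                       # get_meminfo <field> [node]
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node queries read the node's own meminfo, as traced for node0 above
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # Walk the file field by field; skip ("continue") until the requested key matches
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }
    # e.g. get_meminfo HugePages_Total  -> 1024 in the pass above
    #      get_meminfo HugePages_Surp 0 -> 0 for node0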
05:02:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.579 05:02:10 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:51.838 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:04:51.838 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:52.100 05:02:10 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:52.100 05:02:10 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:52.100 05:02:10 -- setup/hugepages.sh@89 -- # local node 00:04:52.100 05:02:10 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:52.100 05:02:10 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:52.100 05:02:10 -- setup/hugepages.sh@92 -- # local surp 00:04:52.100 05:02:10 -- setup/hugepages.sh@93 -- # local resv 00:04:52.100 05:02:10 -- setup/hugepages.sh@94 -- # local anon 00:04:52.100 05:02:10 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:52.100 05:02:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:52.100 05:02:10 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:52.100 05:02:10 -- setup/common.sh@18 -- # local node= 00:04:52.100 05:02:10 -- setup/common.sh@19 -- # local var val 00:04:52.100 05:02:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.100 05:02:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.100 05:02:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.100 05:02:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.100 05:02:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.100 05:02:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.100 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.100 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.100 05:02:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 6058376 kB' 'MemAvailable: 10436200 kB' 'Buffers: 35112 kB' 'Cached: 4494780 kB' 'SwapCached: 0 kB' 'Active: 412236 kB' 'Inactive: 4231152 kB' 'Active(anon): 124836 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231152 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 142348 kB' 'Mapped: 58416 kB' 'Shmem: 2592 kB' 'KReclaimable: 181024 kB' 'Slab: 260688 kB' 'SReclaimable: 181024 kB' 'SUnreclaim: 79664 kB' 'KernelStack: 5056 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598868 kB' 'Committed_AS: 381972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20136 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:04:52.100 05:02:10 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.100 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.100 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.100 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.100 05:02:10 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.100 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.100 05:02:10 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 
05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ KernelStack 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.101 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.101 05:02:10 -- setup/common.sh@31 
-- # read -r var val _ 00:04:52.101 05:02:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.101 05:02:10 -- setup/common.sh@33 -- # echo 0 00:04:52.101 05:02:10 -- setup/common.sh@33 -- # return 0 00:04:52.101 05:02:10 -- setup/hugepages.sh@97 -- # anon=0 00:04:52.101 05:02:10 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:52.102 05:02:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.102 05:02:10 -- setup/common.sh@18 -- # local node= 00:04:52.102 05:02:10 -- setup/common.sh@19 -- # local var val 00:04:52.102 05:02:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.102 05:02:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.102 05:02:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.102 05:02:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.102 05:02:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.102 05:02:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 6058628 kB' 'MemAvailable: 10436452 kB' 'Buffers: 35112 kB' 'Cached: 4494780 kB' 'SwapCached: 0 kB' 'Active: 412048 kB' 'Inactive: 4231152 kB' 'Active(anon): 124648 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231152 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 142212 kB' 'Mapped: 58404 kB' 'Shmem: 2592 kB' 'KReclaimable: 181024 kB' 'Slab: 260684 kB' 'SReclaimable: 181024 kB' 'SUnreclaim: 79660 kB' 'KernelStack: 5040 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598868 kB' 'Committed_AS: 381972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20136 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 
00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.102 05:02:10 -- 
setup/common.sh@32 -- # continue 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.102 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.102 05:02:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.103 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.103 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.103 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.103 05:02:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.103 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.103 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.103 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.103 05:02:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.103 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.103 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.103 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.103 05:02:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.103 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.103 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.103 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.103 05:02:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.103 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.103 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.103 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.103 05:02:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.103 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.103 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.103 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.103 05:02:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.103 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.103 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.103 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.103 05:02:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.103 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.103 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.103 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.103 05:02:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.103 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.103 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.103 05:02:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.103 05:02:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.103 05:02:10 -- setup/common.sh@32 -- # continue 00:04:52.103 05:02:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 
00:04:52.103 05:02:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.103 05:02:11 -- setup/common.sh@33 -- # echo 0 00:04:52.103 05:02:11 -- setup/common.sh@33 -- # return 0 00:04:52.103 05:02:11 -- setup/hugepages.sh@99 -- # surp=0 00:04:52.103 05:02:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:52.103 05:02:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:52.103 05:02:11 -- setup/common.sh@18 -- # local node= 00:04:52.103 05:02:11 -- setup/common.sh@19 -- # local var val 00:04:52.103 05:02:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.103 05:02:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.103 05:02:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.103 05:02:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.103 05:02:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.103 05:02:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.103 05:02:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 6058916 kB' 'MemAvailable: 10436740 kB' 'Buffers: 35112 kB' 'Cached: 4494780 kB' 'SwapCached: 0 kB' 'Active: 412076 kB' 'Inactive: 4231152 kB' 'Active(anon): 124676 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231152 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 142208 kB' 'Mapped: 58404 kB' 'Shmem: 2592 kB' 'KReclaimable: 181024 kB' 'Slab: 260684 kB' 'SReclaimable: 181024 kB' 'SUnreclaim: 79660 kB' 'KernelStack: 5040 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598868 kB' 'Committed_AS: 381972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20136 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.103 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.103 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- 
setup/common.sh@32 -- # continue 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 
05:02:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # continue 
00:04:52.104 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.104 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.104 05:02:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.104 05:02:11 -- setup/common.sh@33 -- # echo 0 00:04:52.104 05:02:11 -- setup/common.sh@33 -- # return 0 00:04:52.104 05:02:11 -- setup/hugepages.sh@100 -- # resv=0 00:04:52.104 nr_hugepages=512 00:04:52.104 05:02:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:52.104 resv_hugepages=0 00:04:52.104 05:02:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:52.104 surplus_hugepages=0 00:04:52.104 05:02:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:52.104 anon_hugepages=0 00:04:52.104 05:02:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:52.104 05:02:11 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:52.104 05:02:11 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:52.104 05:02:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:52.104 05:02:11 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:52.104 05:02:11 -- setup/common.sh@18 -- # local node= 00:04:52.104 05:02:11 -- setup/common.sh@19 -- # local var val 00:04:52.104 05:02:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.104 05:02:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.104 05:02:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.104 05:02:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.104 05:02:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.105 05:02:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.105 05:02:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 6059544 kB' 'MemAvailable: 10437368 kB' 'Buffers: 35112 kB' 'Cached: 4494780 kB' 'SwapCached: 0 kB' 'Active: 412096 kB' 'Inactive: 4231152 kB' 'Active(anon): 124696 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231152 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 142028 kB' 'Mapped: 58404 kB' 'Shmem: 2592 kB' 'KReclaimable: 181024 kB' 'Slab: 260684 kB' 'SReclaimable: 181024 kB' 'SUnreclaim: 79660 kB' 'KernelStack: 5024 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598868 kB' 'Committed_AS: 381584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20104 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # continue 
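The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_... ]]" followed by "continue" in this trace come from the get_meminfo helper in setup/common.sh: under xtrace it logs one comparison per /proc/meminfo field until it reaches the requested counter (here HugePages_Rsvd, which returned 0, and next HugePages_Total), echoes that value, and returns. A minimal, illustrative sketch of the same scan pattern follows; the function name and output line are mine, not the SPDK helper's:

  #!/usr/bin/env bash
  # Sketch only: walk /proc/meminfo one "Field: value" pair at a time and
  # print the requested counter, skipping everything else (each skipped
  # field is one "continue" in the trace above).
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done </proc/meminfo
      return 1
  }

  resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0 in this run
  total=$(get_meminfo_sketch HugePages_Total)   # 512 during the per_node_1G_alloc test
  echo "nr_hugepages=$total resv_hugepages=$resv"
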
00:04:52.105 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.105 05:02:11 -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.105 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.105 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.106 05:02:11 -- setup/common.sh@33 -- # echo 512 00:04:52.106 05:02:11 -- setup/common.sh@33 -- # return 0 00:04:52.106 05:02:11 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:52.106 05:02:11 -- setup/hugepages.sh@112 -- # get_nodes 00:04:52.106 05:02:11 -- setup/hugepages.sh@27 -- # local node 00:04:52.106 05:02:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.106 05:02:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:52.106 05:02:11 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:52.106 05:02:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:52.106 05:02:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:52.106 05:02:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:52.106 05:02:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:52.106 05:02:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.106 05:02:11 -- setup/common.sh@18 -- # local node=0 00:04:52.106 05:02:11 -- setup/common.sh@19 -- # local var val 00:04:52.106 05:02:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.106 05:02:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.106 05:02:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:52.106 05:02:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:52.106 05:02:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.106 05:02:11 -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.106 05:02:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 6059840 kB' 'MemUsed: 6186476 kB' 'SwapCached: 0 kB' 'Active: 411532 kB' 'Inactive: 4231152 kB' 'Active(anon): 124132 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231152 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 4529892 kB' 'Mapped: 58404 kB' 'AnonPages: 141676 kB' 'Shmem: 2592 kB' 'KernelStack: 4992 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 181024 kB' 'Slab: 260684 kB' 'SReclaimable: 181024 kB' 'SUnreclaim: 79660 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.106 05:02:11 -- setup/common.sh@31 
-- # read -r var val _ 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.106 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.106 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.107 
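When a node id is passed (get_meminfo HugePages_Surp 0 in the trace above), the same helper switches mem_f to /sys/devices/system/node/node0/meminfo and strips the "Node 0 " prefix from each line before parsing, which is what the mapfile plus "${mem[@]#Node +([0-9]) }" step shown earlier does. A rough, illustrative equivalent for pulling one per-node counter (not the SPDK code itself):

  node=0
  node_meminfo=/sys/devices/system/node/node$node/meminfo
  if [[ -e $node_meminfo ]]; then
      # Lines look like "Node 0 HugePages_Surp:     0"; drop the prefix,
      # then split on colon/whitespace to take the value.
      surp=$(sed "s/^Node $node //" "$node_meminfo" |
             awk -F'[: ]+' '/^HugePages_Surp/ {print $2}')
      echo "node${node} HugePages_Surp=${surp}"   # 0 in this run
  fi
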
05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.107 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.107 05:02:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.107 05:02:11 -- setup/common.sh@33 -- # echo 0 00:04:52.107 05:02:11 -- setup/common.sh@33 -- # return 0 00:04:52.107 05:02:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:52.107 05:02:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:52.107 05:02:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:52.107 05:02:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:52.107 node0=512 expecting 512 00:04:52.107 05:02:11 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:52.107 05:02:11 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:52.107 00:04:52.107 real 0m0.614s 00:04:52.107 user 0m0.239s 00:04:52.107 sys 0m0.417s 00:04:52.107 05:02:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.107 05:02:11 -- common/autotest_common.sh@10 -- # set +x 00:04:52.107 ************************************ 00:04:52.107 END TEST per_node_1G_alloc 00:04:52.107 ************************************ 00:04:52.107 05:02:11 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:52.107 05:02:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:52.107 05:02:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:52.107 05:02:11 -- common/autotest_common.sh@10 -- # set +x 00:04:52.107 ************************************ 00:04:52.107 START TEST even_2G_alloc 00:04:52.107 ************************************ 00:04:52.107 05:02:11 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:04:52.107 05:02:11 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:52.107 05:02:11 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:52.107 05:02:11 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:52.107 05:02:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:52.107 05:02:11 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:52.107 05:02:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:52.107 05:02:11 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:52.107 05:02:11 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:52.107 05:02:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:52.107 05:02:11 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:52.107 05:02:11 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:52.107 05:02:11 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:52.107 05:02:11 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:52.107 05:02:11 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:52.107 05:02:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:52.107 05:02:11 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:52.107 05:02:11 -- setup/hugepages.sh@83 -- # : 0 00:04:52.107 05:02:11 -- setup/hugepages.sh@84 -- # : 0 00:04:52.107 05:02:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:52.107 05:02:11 -- setup/hugepages.sh@153 -- # 
NRHUGE=1024 00:04:52.107 05:02:11 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:52.107 05:02:11 -- setup/hugepages.sh@153 -- # setup output 00:04:52.107 05:02:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.107 05:02:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:52.365 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:04:52.365 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:52.935 05:02:11 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:52.935 05:02:11 -- setup/hugepages.sh@89 -- # local node 00:04:52.935 05:02:11 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:52.935 05:02:11 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:52.935 05:02:11 -- setup/hugepages.sh@92 -- # local surp 00:04:52.935 05:02:11 -- setup/hugepages.sh@93 -- # local resv 00:04:52.935 05:02:11 -- setup/hugepages.sh@94 -- # local anon 00:04:52.935 05:02:11 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:52.935 05:02:11 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:52.935 05:02:11 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:52.935 05:02:11 -- setup/common.sh@18 -- # local node= 00:04:52.935 05:02:11 -- setup/common.sh@19 -- # local var val 00:04:52.935 05:02:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.935 05:02:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.935 05:02:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.935 05:02:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.935 05:02:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.935 05:02:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.935 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.935 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.935 05:02:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 5020340 kB' 'MemAvailable: 9398164 kB' 'Buffers: 35112 kB' 'Cached: 4494780 kB' 'SwapCached: 0 kB' 'Active: 411984 kB' 'Inactive: 4231152 kB' 'Active(anon): 124584 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231152 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 142096 kB' 'Mapped: 58404 kB' 'Shmem: 2592 kB' 'KReclaimable: 181024 kB' 'Slab: 260700 kB' 'SReclaimable: 181024 kB' 'SUnreclaim: 79676 kB' 'KernelStack: 5056 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074580 kB' 'Committed_AS: 381972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20152 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ MemFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 
05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.936 05:02:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.936 05:02:11 -- 
setup/common.sh@32 -- # continue 00:04:52.936 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.937 05:02:11 -- setup/common.sh@33 -- # echo 0 00:04:52.937 05:02:11 -- setup/common.sh@33 -- # return 0 00:04:52.937 05:02:11 -- setup/hugepages.sh@97 -- # anon=0 00:04:52.937 05:02:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:52.937 05:02:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.937 05:02:11 -- setup/common.sh@18 -- # local node= 00:04:52.937 05:02:11 -- setup/common.sh@19 -- # local var val 00:04:52.937 05:02:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.937 05:02:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.937 05:02:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.937 05:02:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.937 05:02:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.937 05:02:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 5021008 kB' 'MemAvailable: 9398832 kB' 'Buffers: 35112 kB' 'Cached: 4494780 kB' 'SwapCached: 0 kB' 'Active: 412172 kB' 'Inactive: 4231152 kB' 'Active(anon): 124772 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231152 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 142320 kB' 'Mapped: 58404 kB' 'Shmem: 2592 kB' 'KReclaimable: 181024 kB' 'Slab: 260692 kB' 'SReclaimable: 181024 kB' 'SUnreclaim: 79668 kB' 'KernelStack: 5040 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074580 kB' 'Committed_AS: 381972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20120 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.937 
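The even_2G_alloc test traced here requests 2097152 kB of hugepages; with the 2048 kB Hugepagesize reported in the meminfo dumps that works out to 1024 pages, set as NRHUGE=1024 with HUGE_EVEN_ALLOC=yes before scripts/setup.sh runs, after which verify_nr_hugepages re-reads the kernel counters (AnonHugePages for the anon share, HugePages_Surp and HugePages_Rsvd for slack). A back-of-the-envelope version of that bookkeeping, assuming the plain total/surplus/reserved comparison mirrors the "(( 512 == nr_hugepages + surp + resv ))" assertion seen earlier in this trace:

  size_kb=2097152
  hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB on this VM
  expected=$(( size_kb / hugepagesize_kb ))                            # 1024 pages
  echo "NRHUGE=$expected HUGE_EVEN_ALLOC=yes"

  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
  # Same shape as the earlier assertion, with 1024 expected for this test.
  (( expected == total + surp + resv )) && echo "nr_hugepages=$total ok" || echo "mismatch"
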
05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.937 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.937 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # 
continue 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.938 05:02:11 -- setup/common.sh@33 -- # echo 0 00:04:52.938 05:02:11 -- setup/common.sh@33 -- # return 0 00:04:52.938 05:02:11 -- setup/hugepages.sh@99 -- # surp=0 00:04:52.938 05:02:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:52.938 05:02:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:52.938 05:02:11 -- setup/common.sh@18 -- # local node= 00:04:52.938 05:02:11 -- setup/common.sh@19 -- # local var val 00:04:52.938 05:02:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.938 05:02:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.938 05:02:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.938 05:02:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.938 05:02:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.938 05:02:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 5021008 kB' 'MemAvailable: 9398832 kB' 'Buffers: 35112 kB' 'Cached: 4494780 kB' 'SwapCached: 0 kB' 'Active: 412004 kB' 'Inactive: 4231152 kB' 'Active(anon): 124604 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 
'Inactive(file): 4231152 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 142116 kB' 'Mapped: 58404 kB' 'Shmem: 2592 kB' 'KReclaimable: 181024 kB' 'Slab: 260692 kB' 'SReclaimable: 181024 kB' 'SUnreclaim: 79668 kB' 'KernelStack: 5008 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074580 kB' 'Committed_AS: 381972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20120 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.938 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.938 05:02:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 
00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.939 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.939 05:02:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.939 05:02:11 -- setup/common.sh@33 -- # echo 0 00:04:52.939 05:02:11 -- setup/common.sh@33 -- # return 0 00:04:52.939 nr_hugepages=1024 00:04:52.939 resv_hugepages=0 00:04:52.939 surplus_hugepages=0 00:04:52.939 anon_hugepages=0 00:04:52.939 05:02:11 -- setup/hugepages.sh@100 -- # resv=0 00:04:52.939 05:02:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:52.939 05:02:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:52.939 05:02:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:52.939 05:02:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:52.939 05:02:11 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:52.939 05:02:11 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:52.939 05:02:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:52.939 05:02:11 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:52.939 05:02:11 -- setup/common.sh@18 -- # local node= 00:04:52.939 05:02:11 -- setup/common.sh@19 -- # local var val 00:04:52.939 05:02:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.939 05:02:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.939 05:02:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.939 05:02:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.939 05:02:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.940 05:02:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.940 05:02:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 5021008 kB' 'MemAvailable: 9398832 kB' 'Buffers: 35112 kB' 'Cached: 4494780 kB' 'SwapCached: 0 kB' 'Active: 412068 kB' 'Inactive: 4231152 kB' 'Active(anon): 124668 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231152 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 142224 kB' 'Mapped: 58404 kB' 'Shmem: 2592 kB' 'KReclaimable: 181024 kB' 'Slab: 260692 kB' 'SReclaimable: 181024 kB' 'SUnreclaim: 79668 kB' 'KernelStack: 5040 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074580 kB' 'Committed_AS: 381972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20120 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.940 
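The repeated "[[ <key> == \H\u\g\e\P\a\g\e\s... ]] / continue" records in this trace are setup/common.sh's get_meminfo() walking every field of a meminfo snapshot until it reaches the key it was asked for (HugePages_Surp and HugePages_Rsvd above, HugePages_Total next), then echoing that key's value; the mapfile and "${mem[@]#Node +([0-9]) }" steps slurp the file and strip any per-node "Node <n>" prefix first. A simplified stand-alone lookup in the same spirit (illustrative only, with a hypothetical name, not the actual SPDK helper):

    get_meminfo_value() {   # usage: get_meminfo_value <Key> [node]
        local key=$1 node=${2:-} file=/proc/meminfo
        # with a node argument, read that node's own counters instead
        [[ -n $node ]] && file=/sys/devices/system/node/node${node}/meminfo
        # per-node files prefix every record with "Node <n> "; strip it, then
        # print the number that follows "<Key>:" (any trailing kB unit is dropped)
        sed -E 's/^Node [0-9]+ +//' "$file" |
            awk -v k="$key" '$1 == k":" { print $2; exit }'
    }
    get_meminfo_value HugePages_Rsvd   # prints 0 here, matching resv=0 in the trace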
05:02:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.940 
05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.940 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.940 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 
00:04:52.941 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.941 05:02:11 -- setup/common.sh@33 -- # echo 1024 00:04:52.941 05:02:11 -- setup/common.sh@33 -- # return 0 00:04:52.941 05:02:11 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:52.941 05:02:11 -- setup/hugepages.sh@112 -- # get_nodes 00:04:52.941 05:02:11 -- setup/hugepages.sh@27 -- # local node 00:04:52.941 05:02:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.941 05:02:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:52.941 05:02:11 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:52.941 05:02:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:52.941 05:02:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:52.941 05:02:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:52.941 05:02:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:52.941 05:02:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.941 05:02:11 -- setup/common.sh@18 -- # local node=0 00:04:52.941 05:02:11 -- setup/common.sh@19 -- # local var val 00:04:52.941 05:02:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.941 05:02:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.941 05:02:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:52.941 05:02:11 -- 
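Those lookups feed the consistency check recorded just above (hugepages.sh@107 and @110): the kernel's HugePages_Total has to equal the requested nr_hugepages plus any surplus and reserved pages before the per-node breakdown, which is starting here against node 0, is examined. A minimal sketch of that check, assuming the get_meminfo_value sketch above:

    nr_hugepages=1024
    surp=$(get_meminfo_value HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo_value HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo_value HugePages_Total)   # 1024 in this run
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2
    # 1024 == 1024 + 0 + 0 holds, so the test goes on to check each NUMA node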
setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:52.941 05:02:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.941 05:02:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.941 05:02:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 5021008 kB' 'MemUsed: 7225308 kB' 'SwapCached: 0 kB' 'Active: 412092 kB' 'Inactive: 4231152 kB' 'Active(anon): 124692 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231152 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 4529892 kB' 'Mapped: 58404 kB' 'AnonPages: 142244 kB' 'Shmem: 2592 kB' 'KernelStack: 5040 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 181024 kB' 'Slab: 260692 kB' 'SReclaimable: 181024 kB' 'SUnreclaim: 79668 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
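From here the same key scan runs against node 0's own counters: with a node argument, mem_f switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo, whose records carry a "Node 0" prefix (stripped by the mapfile expansion in the trace) and report per-node fields such as MemUsed and FilePages in place of MemAvailable and Cached. With the illustrative helper above, the equivalent per-node lookup is:

    get_meminfo_value HugePages_Surp 0   # reads node0/meminfo; 0 on this single-node VM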
00:04:52.941 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.941 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.941 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 
00:04:52.942 05:02:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # continue 
00:04:52.942 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # continue 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.942 05:02:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.942 05:02:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.942 05:02:11 -- setup/common.sh@33 -- # echo 0 00:04:52.942 05:02:11 -- setup/common.sh@33 -- # return 0 00:04:52.942 05:02:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:52.942 05:02:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:52.942 node0=1024 expecting 1024 00:04:52.942 ************************************ 00:04:52.942 END TEST even_2G_alloc 00:04:52.942 ************************************ 00:04:52.942 05:02:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:52.942 05:02:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:52.942 05:02:11 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:52.942 05:02:11 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:52.942 00:04:52.942 real 0m0.747s 00:04:52.942 user 0m0.242s 00:04:52.942 sys 0m0.533s 00:04:52.942 05:02:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.942 05:02:11 -- common/autotest_common.sh@10 -- # set +x 00:04:52.942 05:02:11 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:52.942 05:02:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:52.942 05:02:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:52.942 05:02:11 -- common/autotest_common.sh@10 -- # set +x 00:04:52.942 ************************************ 00:04:52.942 START TEST odd_alloc 00:04:52.942 ************************************ 00:04:52.942 05:02:11 -- common/autotest_common.sh@1104 -- # odd_alloc 00:04:52.942 05:02:11 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:52.942 05:02:11 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:52.942 05:02:11 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:52.942 05:02:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:52.942 05:02:11 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:52.942 05:02:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:52.942 05:02:11 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:52.942 05:02:11 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:52.942 05:02:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:52.942 05:02:11 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:52.942 05:02:11 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:52.942 05:02:11 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:52.942 05:02:11 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:52.942 05:02:11 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:52.942 05:02:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:52.942 05:02:11 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:52.942 05:02:11 -- setup/hugepages.sh@83 -- # : 0 00:04:52.942 05:02:11 
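even_2G_alloc ends here with node 0 holding exactly the 1024 pages the test expected ([[ 1024 == \1\0\2\4 ]]), and run_test moves on to odd_alloc. That test asks for 2098176 kB of hugepage memory, which is not a whole number of 2048 kB pages, so the helper settles on an odd target of 1025 pages, which is the point of the test. The arithmetic, as a rough check rather than the exact hugepages.sh rounding code:

    size_kb=2098176   # the size passed to get_test_nr_hugepages
    page_kb=2048      # Hugepagesize reported in /proc/meminfo
    echo $(( size_kb / page_kb ))                   # 1024: truncating division falls short
    echo $(( (size_kb + page_kb - 1) / page_kb ))   # 1025: rounding up matches nr_hugepages=1025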
-- setup/hugepages.sh@84 -- # : 0 00:04:52.942 05:02:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:52.942 05:02:11 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:52.942 05:02:11 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:52.942 05:02:11 -- setup/hugepages.sh@160 -- # setup output 00:04:52.942 05:02:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.942 05:02:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:53.201 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:04:53.201 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:53.461 05:02:12 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:53.461 05:02:12 -- setup/hugepages.sh@89 -- # local node 00:04:53.461 05:02:12 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:53.461 05:02:12 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:53.461 05:02:12 -- setup/hugepages.sh@92 -- # local surp 00:04:53.461 05:02:12 -- setup/hugepages.sh@93 -- # local resv 00:04:53.461 05:02:12 -- setup/hugepages.sh@94 -- # local anon 00:04:53.461 05:02:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:53.461 05:02:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:53.461 05:02:12 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:53.461 05:02:12 -- setup/common.sh@18 -- # local node= 00:04:53.461 05:02:12 -- setup/common.sh@19 -- # local var val 00:04:53.461 05:02:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:53.461 05:02:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.461 05:02:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.461 05:02:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.461 05:02:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.461 05:02:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.461 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.461 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.461 05:02:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 5026732 kB' 'MemAvailable: 9404576 kB' 'Buffers: 35112 kB' 'Cached: 4494784 kB' 'SwapCached: 0 kB' 'Active: 411988 kB' 'Inactive: 4231156 kB' 'Active(anon): 124588 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231156 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 142380 kB' 'Mapped: 58404 kB' 'Shmem: 2592 kB' 'KReclaimable: 181040 kB' 'Slab: 260708 kB' 'SReclaimable: 181040 kB' 'SUnreclaim: 79668 kB' 'KernelStack: 5056 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5073556 kB' 'Committed_AS: 381972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20136 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:04:53.461 05:02:12 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.461 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.461 05:02:12 -- setup/common.sh@31 
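scripts/setup.sh then runs with HUGEMEM=2049 (megabytes, i.e. 2098176 kB) and HUGE_EVEN_ALLOC=yes in its environment; it leaves the virtio disk backing the mounted root filesystem alone ("so not binding PCI dev"), reports the other device as already on uio_pci_generic, and reallocates hugepages. The meminfo dump that follows shows the new allocation took hold: HugePages_Total is 1025 and Hugetlb is 2099200 kB, i.e. 1025 x 2048 kB. A quick stand-alone consistency check in that spirit, valid only when a single hugepage size is in use as it is here (a hypothetical helper, not part of the test scripts):

    awk '/^HugePages_Total:/ { n = $2 } /^Hugepagesize:/ { sz = $2 } /^Hugetlb:/ { ht = $2 }
         END { if (ht == n * sz) print "hugetlb accounting consistent"; else print "hugetlb accounting mismatch" }' /proc/meminfo
    # with the values above: 1025 * 2048 kB = 2099200 kB, so this reports consistent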
-- # IFS=': ' 00:04:53.461 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.461 05:02:12 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.461 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.461 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.461 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.461 05:02:12 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.461 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.461 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- 
setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.462 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.462 05:02:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.462 05:02:12 -- setup/common.sh@33 -- # echo 0 00:04:53.462 05:02:12 -- setup/common.sh@33 -- # return 0 00:04:53.462 05:02:12 -- setup/hugepages.sh@97 -- # anon=0 00:04:53.725 05:02:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:53.725 05:02:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.725 05:02:12 -- setup/common.sh@18 -- # local node= 00:04:53.725 05:02:12 -- setup/common.sh@19 -- # local var val 00:04:53.725 05:02:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:53.725 05:02:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.725 05:02:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.725 05:02:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.725 05:02:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.725 05:02:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.725 05:02:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 5027324 kB' 'MemAvailable: 9405168 kB' 'Buffers: 35112 kB' 'Cached: 4494784 kB' 'SwapCached: 0 kB' 'Active: 411804 kB' 'Inactive: 4231156 kB' 'Active(anon): 124404 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231156 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 142200 kB' 'Mapped: 58404 kB' 'Shmem: 2592 kB' 'KReclaimable: 181040 kB' 'Slab: 260700 kB' 'SReclaimable: 181040 kB' 'SUnreclaim: 79660 kB' 'KernelStack: 5024 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5073556 kB' 'Committed_AS: 381972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20104 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.725 05:02:12 
-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.725 05:02:12 -- 
setup/common.sh@32 -- # continue 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.725 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.725 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 
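What the trace above and below is doing: setup/common.sh's get_meminfo walks the relevant meminfo file one "key: value" pair at a time, skipping every field (the long run of "continue" lines) until it reaches the one it was asked for — AnonHugePages, then HugePages_Surp, and so on — and echoes that value. A minimal stand-alone sketch of the same scan, written here for readability rather than copied from the upstream helper (names and structure are illustrative):

# Readable stand-in for the scan traced in this log. Illustrative only:
# the real helper lives in test/setup/common.sh and differs in detail.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # with a node argument, prefer the per-node meminfo when it exists
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # per-node files prefix every line with "Node <n> "; strip that,
    # then split on ": " exactly as the traced read loop does
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    echo 0
}
# e.g. get_meminfo_sketch AnonHugePages   -> 0 on this runner
#      get_meminfo_sketch HugePages_Total -> 1025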
00:04:53.726 05:02:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.726 05:02:12 -- setup/common.sh@33 -- # echo 0 00:04:53.726 05:02:12 -- setup/common.sh@33 -- # return 0 00:04:53.726 05:02:12 -- setup/hugepages.sh@99 -- # surp=0 00:04:53.726 05:02:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:53.726 05:02:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:53.726 05:02:12 -- setup/common.sh@18 -- # local node= 00:04:53.726 05:02:12 -- setup/common.sh@19 -- # local var val 00:04:53.726 05:02:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:53.726 05:02:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.726 05:02:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.726 05:02:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.726 05:02:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.726 05:02:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.726 05:02:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 5027324 kB' 'MemAvailable: 9405168 kB' 'Buffers: 35112 kB' 'Cached: 
4494784 kB' 'SwapCached: 0 kB' 'Active: 411772 kB' 'Inactive: 4231156 kB' 'Active(anon): 124372 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231156 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 141900 kB' 'Mapped: 58404 kB' 'Shmem: 2592 kB' 'KReclaimable: 181040 kB' 'Slab: 260696 kB' 'SReclaimable: 181040 kB' 'SUnreclaim: 79656 kB' 'KernelStack: 5008 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5073556 kB' 'Committed_AS: 381972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20120 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.726 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.726 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 
-- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 
05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.727 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.727 05:02:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.727 05:02:12 -- setup/common.sh@33 -- # echo 0 00:04:53.727 05:02:12 -- setup/common.sh@33 -- # return 0 00:04:53.727 05:02:12 -- setup/hugepages.sh@100 -- # resv=0 00:04:53.727 nr_hugepages=1025 00:04:53.727 05:02:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:53.727 resv_hugepages=0 00:04:53.727 05:02:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:53.727 surplus_hugepages=0 00:04:53.727 05:02:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:53.727 anon_hugepages=0 00:04:53.727 05:02:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:53.727 05:02:12 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:53.727 05:02:12 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:53.727 05:02:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:53.727 05:02:12 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:53.728 05:02:12 -- setup/common.sh@18 -- # local node= 00:04:53.728 05:02:12 -- setup/common.sh@19 -- # local var val 00:04:53.728 05:02:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:53.728 05:02:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.728 05:02:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.728 05:02:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.728 05:02:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.728 05:02:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 5027324 kB' 'MemAvailable: 9405168 kB' 'Buffers: 35112 kB' 'Cached: 4494784 kB' 'SwapCached: 0 kB' 'Active: 412000 kB' 'Inactive: 4231156 kB' 'Active(anon): 124600 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231156 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 142128 kB' 'Mapped: 58404 kB' 'Shmem: 2592 kB' 'KReclaimable: 181040 kB' 'Slab: 260696 kB' 'SReclaimable: 181040 kB' 'SUnreclaim: 79656 kB' 'KernelStack: 4992 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5073556 kB' 'Committed_AS: 381972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20136 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 
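At this point odd_alloc has pulled anon=0, surp=0 and resv=0 out of the meminfo scans and expects exactly 1025 hugepages (an odd count, hence the test name). The guards traced at hugepages.sh@107-110 amount to a small arithmetic consistency check; the following is a hedged, self-contained sketch of that check, with variable names invented here and values taken from this run:

# Illustrative recreation of the odd_alloc accounting guard; not the
# upstream hugepages.sh, just the same arithmetic with this run's values.
expected=1025                                               # odd page count odd_alloc asks for
anon=0 surp=0 resv=0                                        # values read from the meminfo scans above
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo) # 1025 on this runner

if (( expected == total + surp + resv )) && (( expected == total )); then
    echo "hugepage accounting consistent: nr_hugepages=$total surp=$surp resv=$resv"
else
    echo "unexpected hugepage accounting" >&2
fi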
00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': 
' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.728 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.728 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.729 05:02:12 -- setup/common.sh@33 -- # echo 1025 00:04:53.729 05:02:12 -- setup/common.sh@33 -- # return 0 00:04:53.729 05:02:12 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:53.729 05:02:12 -- setup/hugepages.sh@112 -- # get_nodes 00:04:53.729 05:02:12 -- setup/hugepages.sh@27 -- # local node 00:04:53.729 05:02:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:53.729 05:02:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:53.729 05:02:12 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:53.729 05:02:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:53.729 05:02:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:53.729 05:02:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:53.729 05:02:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:53.729 05:02:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.729 05:02:12 -- setup/common.sh@18 -- # local node=0 00:04:53.729 05:02:12 -- setup/common.sh@19 -- # local var val 00:04:53.729 05:02:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:53.729 05:02:12 -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:53.729 05:02:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:53.729 05:02:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:53.729 05:02:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.729 05:02:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.729 05:02:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 5027324 kB' 'MemUsed: 7218992 kB' 'SwapCached: 0 kB' 'Active: 411936 kB' 'Inactive: 4231156 kB' 'Active(anon): 124536 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231156 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 4529896 kB' 'Mapped: 58404 kB' 'AnonPages: 142032 kB' 'Shmem: 2592 kB' 'KernelStack: 5028 kB' 'PageTables: 3972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 181040 kB' 'Slab: 260696 kB' 'SReclaimable: 181040 kB' 'SUnreclaim: 79656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.729 05:02:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.729 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.729 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # 
continue 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.730 05:02:12 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # continue 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.730 05:02:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.730 05:02:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.730 05:02:12 -- setup/common.sh@33 -- # echo 0 00:04:53.730 05:02:12 -- setup/common.sh@33 -- # return 0 00:04:53.730 05:02:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:53.730 05:02:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:53.730 05:02:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:53.730 05:02:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:53.730 node0=1025 expecting 1025 00:04:53.730 05:02:12 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:53.730 05:02:12 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:53.730 00:04:53.730 real 0m0.751s 00:04:53.730 user 0m0.230s 00:04:53.730 sys 0m0.562s 00:04:53.730 05:02:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.730 05:02:12 -- common/autotest_common.sh@10 -- # set +x 00:04:53.730 ************************************ 00:04:53.730 END TEST odd_alloc 00:04:53.730 ************************************ 00:04:53.730 05:02:12 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:53.730 05:02:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:53.730 05:02:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:53.730 05:02:12 -- common/autotest_common.sh@10 -- # set +x 00:04:53.730 ************************************ 00:04:53.730 START TEST custom_alloc 00:04:53.730 ************************************ 00:04:53.730 05:02:12 -- common/autotest_common.sh@1104 -- # custom_alloc 00:04:53.730 05:02:12 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:53.730 05:02:12 -- setup/hugepages.sh@169 -- # local node 00:04:53.730 05:02:12 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:53.730 05:02:12 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:53.730 05:02:12 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:53.730 05:02:12 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:53.730 05:02:12 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:53.730 05:02:12 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:53.730 05:02:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:53.730 05:02:12 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:53.730 05:02:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:53.730 05:02:12 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:53.730 05:02:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:53.730 05:02:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:53.730 05:02:12 -- setup/hugepages.sh@65 -- # local _no_nodes=1 
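The long run of '[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]' / 'continue' entries above is the xtrace of a field-by-field scan of the meminfo dump: the traced helper walks every 'name: value' pair until it reaches the requested key (here HugePages_Surp on node0) and echoes its value, which is why the run ends with 'echo 0' and 'return 0'. The odd_alloc test then confirms node0=1025 hugepages, and custom_alloc's get_test_nr_hugepages turns the requested 1048576 kB into 512 pages at the 2048 kB hugepage size reported later in the dump (1048576 / 2048 = 512). A minimal sketch of that lookup pattern, assuming a hypothetical lookup_meminfo helper rather than the real get_meminfo in setup/common.sh:

#!/usr/bin/env bash
# Hedged sketch of the lookup traced above; lookup_meminfo is a hypothetical
# stand-in for setup/common.sh's get_meminfo, not the actual implementation.
# (The traced helper can also read the per-node files under
# /sys/devices/system/node/node<N>/meminfo, stripping the "Node <N>" prefix;
# that branch is omitted here for brevity.)
lookup_meminfo() {
    local get=$1                             # e.g. HugePages_Surp, Hugepagesize
    local var val _
    while IFS=': ' read -r var val _; do     # skip every field until the key matches
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < /proc/meminfo
    echo 0
}

# 1048576 kB requested / 2048 kB per hugepage = 512 pages, matching the
# nr_hugepages=512 computed for the custom_alloc test above.
echo $(( 1048576 / $(lookup_meminfo Hugepagesize) ))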
00:04:53.730 05:02:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:53.730 05:02:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:53.730 05:02:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:53.730 05:02:12 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:53.730 05:02:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:53.730 05:02:12 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:53.730 05:02:12 -- setup/hugepages.sh@83 -- # : 0 00:04:53.730 05:02:12 -- setup/hugepages.sh@84 -- # : 0 00:04:53.730 05:02:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:53.730 05:02:12 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:53.730 05:02:12 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:53.730 05:02:12 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:53.730 05:02:12 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:53.730 05:02:12 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:53.730 05:02:12 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:53.730 05:02:12 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:53.730 05:02:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:53.730 05:02:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:53.730 05:02:12 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:53.731 05:02:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:53.731 05:02:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:53.731 05:02:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:53.731 05:02:12 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:53.731 05:02:12 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:53.731 05:02:12 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:53.731 05:02:12 -- setup/hugepages.sh@78 -- # return 0 00:04:53.731 05:02:12 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:53.731 05:02:12 -- setup/hugepages.sh@187 -- # setup output 00:04:53.731 05:02:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.731 05:02:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:53.990 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:04:53.990 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:54.252 05:02:13 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:54.252 05:02:13 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:54.252 05:02:13 -- setup/hugepages.sh@89 -- # local node 00:04:54.252 05:02:13 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:54.252 05:02:13 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:54.252 05:02:13 -- setup/hugepages.sh@92 -- # local surp 00:04:54.252 05:02:13 -- setup/hugepages.sh@93 -- # local resv 00:04:54.252 05:02:13 -- setup/hugepages.sh@94 -- # local anon 00:04:54.252 05:02:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:54.252 05:02:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:54.252 05:02:13 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:54.252 05:02:13 -- setup/common.sh@18 -- # local node= 00:04:54.252 05:02:13 -- setup/common.sh@19 -- # local var val 00:04:54.252 05:02:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.252 05:02:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.252 05:02:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.252 05:02:13 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.252 05:02:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.252 05:02:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.252 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.252 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.252 05:02:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 6084952 kB' 'MemAvailable: 10462796 kB' 'Buffers: 35112 kB' 'Cached: 4494784 kB' 'SwapCached: 0 kB' 'Active: 412024 kB' 'Inactive: 4231156 kB' 'Active(anon): 124624 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231156 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 141984 kB' 'Mapped: 58408 kB' 'Shmem: 2592 kB' 'KReclaimable: 181040 kB' 'Slab: 260684 kB' 'SReclaimable: 181040 kB' 'SUnreclaim: 79644 kB' 'KernelStack: 5040 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598868 kB' 'Committed_AS: 381972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20072 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:04:54.252 05:02:13 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.252 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.252 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.252 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.252 05:02:13 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.252 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.252 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.252 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.252 05:02:13 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.252 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.252 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.252 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.252 05:02:13 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.252 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.252 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.252 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.252 05:02:13 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.252 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.252 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.252 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.252 05:02:13 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.252 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.252 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.252 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.252 05:02:13 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- 
setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 
05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.253 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.253 05:02:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.253 05:02:13 -- setup/common.sh@33 -- # echo 0 00:04:54.253 05:02:13 -- setup/common.sh@33 -- # return 0 00:04:54.253 05:02:13 -- setup/hugepages.sh@97 -- # anon=0 00:04:54.253 05:02:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:54.253 05:02:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.253 05:02:13 -- setup/common.sh@18 -- # local node= 00:04:54.253 05:02:13 -- setup/common.sh@19 -- # local var val 00:04:54.253 05:02:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.253 05:02:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.253 05:02:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.254 05:02:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.254 05:02:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.254 05:02:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.254 05:02:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 6084952 kB' 'MemAvailable: 10462796 kB' 'Buffers: 35112 kB' 'Cached: 4494784 kB' 'SwapCached: 0 kB' 'Active: 411928 kB' 'Inactive: 4231156 kB' 'Active(anon): 124528 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231156 kB' 'Unevictable: 28816 kB' 
'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 142396 kB' 'Mapped: 58408 kB' 'Shmem: 2592 kB' 'KReclaimable: 181040 kB' 'Slab: 260684 kB' 'SReclaimable: 181040 kB' 'SUnreclaim: 79644 kB' 'KernelStack: 5040 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598868 kB' 'Committed_AS: 381972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20056 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.254 05:02:13 -- 
setup/common.sh@32 -- # continue 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.254 05:02:13 -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.254 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.254 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.255 05:02:13 -- setup/common.sh@33 -- # echo 0 00:04:54.255 05:02:13 -- setup/common.sh@33 -- # return 0 00:04:54.255 05:02:13 -- setup/hugepages.sh@99 -- # surp=0 00:04:54.255 05:02:13 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:54.255 05:02:13 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:54.255 05:02:13 -- setup/common.sh@18 -- # local node= 00:04:54.255 05:02:13 -- setup/common.sh@19 -- # local var val 00:04:54.255 05:02:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.255 05:02:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.255 05:02:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.255 05:02:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.255 05:02:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.255 05:02:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.255 05:02:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 6084952 kB' 'MemAvailable: 10462796 kB' 'Buffers: 35112 kB' 'Cached: 4494784 kB' 'SwapCached: 0 kB' 'Active: 411760 kB' 'Inactive: 4231156 kB' 'Active(anon): 124360 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231156 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 141944 kB' 'Mapped: 58404 kB' 'Shmem: 2592 kB' 'KReclaimable: 181040 kB' 'Slab: 260684 kB' 'SReclaimable: 181040 kB' 'SUnreclaim: 79644 kB' 'KernelStack: 5024 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598868 kB' 'Committed_AS: 381972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20056 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:54.255 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.255 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.255 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 
00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.256 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.256 05:02:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.256 05:02:13 -- setup/common.sh@33 -- # echo 0 00:04:54.256 05:02:13 -- setup/common.sh@33 -- # return 0 00:04:54.256 05:02:13 -- setup/hugepages.sh@100 -- # resv=0 00:04:54.256 nr_hugepages=512 00:04:54.256 05:02:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:54.256 resv_hugepages=0 00:04:54.256 05:02:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:54.256 surplus_hugepages=0 00:04:54.256 05:02:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:54.256 anon_hugepages=0 00:04:54.257 05:02:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:54.257 05:02:13 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:54.257 05:02:13 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:54.257 05:02:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:54.257 05:02:13 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:54.257 05:02:13 -- setup/common.sh@18 -- # local node= 00:04:54.257 05:02:13 -- setup/common.sh@19 -- # local var val 00:04:54.257 05:02:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.257 05:02:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.257 05:02:13 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:04:54.257 05:02:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.257 05:02:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.257 05:02:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.257 05:02:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 6084952 kB' 'MemAvailable: 10462796 kB' 'Buffers: 35112 kB' 'Cached: 4494784 kB' 'SwapCached: 0 kB' 'Active: 411760 kB' 'Inactive: 4231156 kB' 'Active(anon): 124360 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231156 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 142204 kB' 'Mapped: 58404 kB' 'Shmem: 2592 kB' 'KReclaimable: 181040 kB' 'Slab: 260684 kB' 'SReclaimable: 181040 kB' 'SUnreclaim: 79644 kB' 'KernelStack: 5024 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598868 kB' 'Committed_AS: 381972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20056 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 
00:04:54.257 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
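
The scan continues below until it reaches HugePages_Total; once it does, the surrounding hugepages.sh logic (@100 through @110 in the trace) reduces to one accounting identity: the system-wide total must equal the requested page count plus the surplus and reserved counters it just read. With the values captured in this run that is simply:

# Values taken from the meminfo snapshot above (custom_alloc expects 512 pages).
nr_hugepages=512   # requested by the test
resv=0             # HugePages_Rsvd
surp=0             # HugePages_Surp
total=512          # HugePages_Total, echoed by the scan below
(( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"
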
00:04:54.257 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.257 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.257 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
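
Each get_meminfo call walks every field of the snapshot, which is why a single lookup contributes dozens of these comparison lines to the log. For spot-checking the same counters by hand outside the test harness, a plain awk lookup is equivalent (a manual convenience, not something the SPDK scripts use here):

# Manual equivalents of the lookups this test performs.
awk '/^HugePages_Total:/ {print $2}' /proc/meminfo                              # system-wide total
awk '/HugePages_Total/ {print $NF}' /sys/devices/system/node/node0/meminfo      # node0 total
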
00:04:54.258 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.258 05:02:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.258 05:02:13 -- setup/common.sh@33 -- # echo 512 00:04:54.258 05:02:13 -- setup/common.sh@33 -- # return 0 00:04:54.258 05:02:13 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:54.258 05:02:13 -- setup/hugepages.sh@112 -- # get_nodes 00:04:54.258 05:02:13 -- setup/hugepages.sh@27 -- # local node 00:04:54.258 05:02:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:54.258 05:02:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:54.258 05:02:13 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:54.258 05:02:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:54.258 05:02:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:54.258 05:02:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:54.258 05:02:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:54.258 05:02:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.258 05:02:13 -- setup/common.sh@18 -- # local node=0 00:04:54.258 05:02:13 -- setup/common.sh@19 -- # local var val 00:04:54.258 05:02:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.258 05:02:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.258 05:02:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:54.258 05:02:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:54.258 05:02:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.258 05:02:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.258 05:02:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 6085316 kB' 'MemUsed: 6161000 kB' 'SwapCached: 0 kB' 'Active: 411668 kB' 'Inactive: 4231156 kB' 'Active(anon): 124268 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231156 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 4529896 kB' 'Mapped: 58404 kB' 'AnonPages: 142100 kB' 'Shmem: 2592 kB' 'KernelStack: 4976 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 181040 kB' 'Slab: 260684 kB' 'SReclaimable: 181040 kB' 'SUnreclaim: 79644 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:54.258 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.517 05:02:13 -- 
setup/common.sh@32 -- # continue 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.517 05:02:13 -- 
setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.517 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.517 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.518 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.518 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.518 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.518 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.518 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.518 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.518 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.518 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.518 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.518 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.518 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.518 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.518 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 
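
After the system-wide total checks out, hugepages.sh's get_nodes / per-node pass takes over (visible just above): with a single NUMA node, the entire 512-page pool is expected on node0, so the script re-reads node0's counters from /sys/devices/system/node/node0/meminfo and, once the HugePages_Surp scan below finishes, prints node0=512 expecting 512. An illustrative version of that per-node assertion (surplus and reserved are both 0 in this run, so it degenerates to a straight comparison):

# Illustrative per-node assertion; the real bookkeeping lives in setup/hugepages.sh.
expected=512
actual=$(awk '/HugePages_Total/ {print $NF}' /sys/devices/system/node/node0/meminfo)
echo "node0=${actual} expecting ${expected}"
[[ $actual == "$expected" ]]
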
00:04:54.518 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.518 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.518 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.518 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.518 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.518 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.518 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.518 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.518 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.518 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.518 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.518 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.518 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.518 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.518 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.518 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.518 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # continue 00:04:54.518 05:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.518 05:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.518 05:02:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.518 05:02:13 -- setup/common.sh@33 -- # echo 0 00:04:54.518 05:02:13 -- setup/common.sh@33 -- # return 0 00:04:54.518 05:02:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:54.518 05:02:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:54.518 05:02:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:54.518 05:02:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:54.518 node0=512 expecting 512 00:04:54.518 05:02:13 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:54.518 05:02:13 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:54.518 00:04:54.518 real 0m0.649s 00:04:54.518 user 0m0.254s 00:04:54.518 sys 0m0.438s 00:04:54.518 05:02:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.518 05:02:13 -- 
common/autotest_common.sh@10 -- # set +x 00:04:54.518 ************************************ 00:04:54.518 END TEST custom_alloc 00:04:54.518 ************************************ 00:04:54.518 05:02:13 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:54.518 05:02:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:54.518 05:02:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:54.518 05:02:13 -- common/autotest_common.sh@10 -- # set +x 00:04:54.518 ************************************ 00:04:54.518 START TEST no_shrink_alloc 00:04:54.518 ************************************ 00:04:54.518 05:02:13 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:04:54.518 05:02:13 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:54.518 05:02:13 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:54.518 05:02:13 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:54.518 05:02:13 -- setup/hugepages.sh@51 -- # shift 00:04:54.518 05:02:13 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:54.518 05:02:13 -- setup/hugepages.sh@52 -- # local node_ids 00:04:54.518 05:02:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:54.518 05:02:13 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:54.518 05:02:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:54.518 05:02:13 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:54.518 05:02:13 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:54.518 05:02:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:54.518 05:02:13 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:54.518 05:02:13 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:54.518 05:02:13 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:54.518 05:02:13 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:54.518 05:02:13 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:54.518 05:02:13 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:54.518 05:02:13 -- setup/hugepages.sh@73 -- # return 0 00:04:54.518 05:02:13 -- setup/hugepages.sh@198 -- # setup output 00:04:54.518 05:02:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.518 05:02:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:54.776 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:04:54.776 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:55.037 05:02:14 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:55.037 05:02:14 -- setup/hugepages.sh@89 -- # local node 00:04:55.037 05:02:14 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:55.037 05:02:14 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:55.037 05:02:14 -- setup/hugepages.sh@92 -- # local surp 00:04:55.037 05:02:14 -- setup/hugepages.sh@93 -- # local resv 00:04:55.037 05:02:14 -- setup/hugepages.sh@94 -- # local anon 00:04:55.037 05:02:14 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:55.037 05:02:14 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:55.037 05:02:14 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:55.037 05:02:14 -- setup/common.sh@18 -- # local node= 00:04:55.037 05:02:14 -- setup/common.sh@19 -- # local var val 00:04:55.037 05:02:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.037 05:02:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.037 05:02:14 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:04:55.037 05:02:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.037 05:02:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.037 05:02:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.037 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.037 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.037 05:02:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 5036048 kB' 'MemAvailable: 9413888 kB' 'Buffers: 35112 kB' 'Cached: 4494784 kB' 'SwapCached: 0 kB' 'Active: 411092 kB' 'Inactive: 4231156 kB' 'Active(anon): 123692 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231156 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 141204 kB' 'Mapped: 57520 kB' 'Shmem: 2592 kB' 'KReclaimable: 181036 kB' 'Slab: 260652 kB' 'SReclaimable: 181036 kB' 'SUnreclaim: 79616 kB' 'KernelStack: 5008 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074580 kB' 'Committed_AS: 372972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20088 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:04:55.037 05:02:14 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.037 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.037 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.037 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.037 05:02:14 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.037 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.037 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.037 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.037 05:02:14 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.037 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.037 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.037 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.037 05:02:14 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.037 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.037 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.037 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.037 05:02:14 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.037 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.037 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.037 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.037 05:02:14 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.037 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.037 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.037 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.037 05:02:14 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.037 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.037 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.037 05:02:14 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:55.037 05:02:14 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.037 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.037 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.037 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.037 05:02:14 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.037 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.037 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.037 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.037 05:02:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.037 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.037 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.037 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.037 05:02:14 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.037 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.037 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.037 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.037 05:02:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.037 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.037 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.037 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.037 05:02:14 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.037 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.038 
05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
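
The no_shrink_alloc test that starts above asks get_test_nr_hugepages for 2097152 kB pinned to node 0; with the 2048 kB huge page size reported in every snapshot, that comes out to 1024 pages, which is exactly what the new snapshot's HugePages_Total and Hugetlb lines show. verify_nr_hugepages then performs the same field-by-field walks for AnonHugePages, HugePages_Surp and HugePages_Rsvd, which is what fills the remainder of this stretch of the log. The size-to-pages conversion is just:

# Conversion implied by the trace: 2097152 kB requested, 2048 kB per huge page.
size_kb=2097152
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
echo $(( size_kb / hugepagesize_kb ))                                # -> 1024, matching nr_hugepages=1024
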
00:04:55.038 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.038 05:02:14 -- setup/common.sh@33 -- # echo 0 00:04:55.038 05:02:14 -- setup/common.sh@33 -- # return 0 00:04:55.038 05:02:14 -- setup/hugepages.sh@97 -- # anon=0 00:04:55.038 05:02:14 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:55.038 05:02:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.038 05:02:14 -- setup/common.sh@18 -- # local node= 00:04:55.038 05:02:14 -- setup/common.sh@19 -- # local var val 00:04:55.038 05:02:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.038 05:02:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.038 05:02:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.038 05:02:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.038 05:02:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.038 05:02:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.038 05:02:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 5036048 kB' 'MemAvailable: 9413888 kB' 'Buffers: 35112 kB' 'Cached: 4494784 kB' 'SwapCached: 0 kB' 'Active: 410652 kB' 'Inactive: 4231156 kB' 'Active(anon): 123252 kB' 'Inactive(anon): 0 kB' 
'Active(file): 287400 kB' 'Inactive(file): 4231156 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 140760 kB' 'Mapped: 57512 kB' 'Shmem: 2592 kB' 'KReclaimable: 181036 kB' 'Slab: 260652 kB' 'SReclaimable: 181036 kB' 'SUnreclaim: 79616 kB' 'KernelStack: 4976 kB' 'PageTables: 3976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074580 kB' 'Committed_AS: 372972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20072 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.038 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.038 05:02:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- 
setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.039 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.039 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.040 05:02:14 -- setup/common.sh@33 -- # echo 0 00:04:55.040 05:02:14 -- setup/common.sh@33 -- # return 0 00:04:55.040 05:02:14 -- setup/hugepages.sh@99 -- # surp=0 00:04:55.040 05:02:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:55.040 05:02:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:55.040 05:02:14 -- setup/common.sh@18 -- # local node= 00:04:55.040 05:02:14 -- setup/common.sh@19 -- # local var val 00:04:55.040 05:02:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.040 05:02:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.040 05:02:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.040 05:02:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.040 05:02:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.040 05:02:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.040 05:02:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 5041636 kB' 'MemAvailable: 9419476 kB' 'Buffers: 35112 kB' 'Cached: 4494784 kB' 'SwapCached: 0 kB' 'Active: 410692 kB' 'Inactive: 4231156 kB' 'Active(anon): 123292 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231156 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 140808 kB' 'Mapped: 57512 kB' 'Shmem: 2592 kB' 'KReclaimable: 181036 kB' 'Slab: 260652 kB' 'SReclaimable: 181036 kB' 'SUnreclaim: 79616 kB' 'KernelStack: 4992 kB' 'PageTables: 4024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074580 kB' 'Committed_AS: 372972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20072 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.040 05:02:14 -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.040 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.040 05:02:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.041 05:02:14 -- 
setup/common.sh@32 -- # continue 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 
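The wall of `setup/common.sh@31`/`@32` entries above is one pass of the `get_meminfo` helper: it snapshots `/proc/meminfo` (or a per-node copy under `/sys/devices/system/node/`), strips any `Node N` prefix, and walks the keys until it reaches the one it was asked for, echoing the value and returning. A minimal sketch of that loop, reconstructed from the xtrace lines rather than copied from the repository:

```bash
#!/usr/bin/env bash
# Reconstruction of the get_meminfo helper from the trace above
# (setup/common.sh); a sketch, not the repository source.
shopt -s extglob    # the "Node N " strip below uses the +([0-9]) extended glob

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo mem
    # When a node number is passed, read the per-node view of the same counters.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # per-node lines start with "Node 0 ..."
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue    # each mismatch is one "continue" entry in the log
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Surp      # system-wide lookup, as at the top of this trace
get_meminfo HugePages_Surp 0    # same key restricted to NUMA node 0
```

Every `continue` entry in the log is one non-matching key from that walk, which is why a single lookup produces dozens of near-identical trace lines.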
00:04:55.041 05:02:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.041 05:02:14 -- setup/common.sh@33 -- # echo 0 00:04:55.041 05:02:14 -- setup/common.sh@33 -- # return 0 00:04:55.041 05:02:14 -- setup/hugepages.sh@100 -- # resv=0 00:04:55.041 nr_hugepages=1024 00:04:55.041 05:02:14 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:55.041 resv_hugepages=0 00:04:55.041 05:02:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:55.041 surplus_hugepages=0 00:04:55.041 05:02:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:55.041 anon_hugepages=0 00:04:55.041 05:02:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:55.041 05:02:14 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:55.041 05:02:14 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:55.041 05:02:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:55.041 05:02:14 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:55.041 05:02:14 -- setup/common.sh@18 -- # local node= 00:04:55.041 05:02:14 -- setup/common.sh@19 -- # local var val 00:04:55.041 05:02:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.041 05:02:14 -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:55.041 05:02:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.041 05:02:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.041 05:02:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.041 05:02:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.041 05:02:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 5041636 kB' 'MemAvailable: 9419476 kB' 'Buffers: 35112 kB' 'Cached: 4494784 kB' 'SwapCached: 0 kB' 'Active: 410952 kB' 'Inactive: 4231156 kB' 'Active(anon): 123552 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231156 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 141088 kB' 'Mapped: 57512 kB' 'Shmem: 2592 kB' 'KReclaimable: 181036 kB' 'Slab: 260652 kB' 'SReclaimable: 181036 kB' 'SUnreclaim: 79616 kB' 'KernelStack: 4992 kB' 'PageTables: 4028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074580 kB' 'Committed_AS: 372972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20088 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.041 05:02:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.041 05:02:14 -- setup/common.sh@32 
-- # continue 00:04:55.041 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.042 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.042 05:02:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.042 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.042 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.042 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.042 05:02:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.042 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.042 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.042 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.042 05:02:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.302 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.302 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.303 05:02:14 -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.303 05:02:14 -- setup/common.sh@33 -- # echo 1024 00:04:55.303 05:02:14 -- setup/common.sh@33 -- # return 0 00:04:55.303 05:02:14 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:55.303 05:02:14 -- setup/hugepages.sh@112 -- # get_nodes 00:04:55.303 05:02:14 -- setup/hugepages.sh@27 -- # local node 00:04:55.303 05:02:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.303 05:02:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:55.303 05:02:14 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:55.303 05:02:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:55.303 05:02:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:55.303 05:02:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:55.303 05:02:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:55.303 05:02:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.303 05:02:14 -- setup/common.sh@18 -- # local node=0 00:04:55.303 05:02:14 -- setup/common.sh@19 -- # local var val 00:04:55.303 05:02:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.303 05:02:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.303 05:02:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:55.303 05:02:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:55.303 05:02:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.303 05:02:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.303 05:02:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 5041468 kB' 'MemUsed: 7204848 kB' 'SwapCached: 0 kB' 'Active: 410656 kB' 'Inactive: 4231156 kB' 'Active(anon): 123256 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231156 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 4529896 kB' 'Mapped: 57512 kB' 'AnonPages: 141056 kB' 'Shmem: 2592 kB' 'KernelStack: 4992 kB' 'PageTables: 4024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 181036 kB' 'Slab: 260652 kB' 'SReclaimable: 181036 kB' 'SUnreclaim: 79616 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # [[ 
MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.303 05:02:14 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # 
continue 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.303 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.303 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.304 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.304 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.304 05:02:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.304 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.304 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.304 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.304 05:02:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.304 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.304 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.304 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.304 05:02:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.304 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.304 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.304 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.304 05:02:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.304 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.304 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.304 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.304 05:02:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.304 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.304 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.304 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.304 05:02:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.304 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.304 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.304 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.304 05:02:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.304 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.304 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.304 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.304 05:02:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.304 05:02:14 -- setup/common.sh@33 -- # echo 0 00:04:55.304 05:02:14 -- setup/common.sh@33 -- # return 0 00:04:55.304 05:02:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:55.304 05:02:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:55.304 05:02:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:55.304 05:02:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:55.304 node0=1024 expecting 1024 00:04:55.304 05:02:14 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:55.304 05:02:14 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:55.304 05:02:14 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:55.304 05:02:14 -- setup/hugepages.sh@202 -- # 
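By this point the trace has collected `surp=0` and `resv=0`, confirmed a system-wide `HugePages_Total` of 1024, repeated the lookup against `/sys/devices/system/node/node0/meminfo`, and printed `node0=1024 expecting 1024`. A simplified paraphrase of those checks, reusing the `get_meminfo` sketch above (this mirrors the traced flow, not the script's exact code):

```bash
# Paraphrase of the verification steps traced above (setup/hugepages.sh);
# assumes the get_meminfo sketch shown earlier.
nr_hugepages=1024                         # pool size this test case expects
surp=$(get_meminfo HugePages_Surp)        # 0 in the run above
resv=$(get_meminfo HugePages_Rsvd)        # 0 in the run above
total=$(get_meminfo HugePages_Total)      # 1024 in the run above

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"

# The pool only passes when the kernel-wide total accounts for exactly the
# requested pages plus any surplus/reserved ones (both zero here) ...
(( total == nr_hugepages + surp + resv )) || echo "unexpected HugePages_Total: $total"

# ... and when every NUMA node still holds the count assigned to it, which is
# where the "node0=1024 expecting 1024" line in the log comes from.
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    echo "node${node}=$(get_meminfo HugePages_Total "$node") expecting $nr_hugepages"
done
```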
NRHUGE=512 00:04:55.304 05:02:14 -- setup/hugepages.sh@202 -- # setup output 00:04:55.304 05:02:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.304 05:02:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:55.582 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:04:55.582 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:55.582 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:55.582 05:02:14 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:55.582 05:02:14 -- setup/hugepages.sh@89 -- # local node 00:04:55.582 05:02:14 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:55.582 05:02:14 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:55.582 05:02:14 -- setup/hugepages.sh@92 -- # local surp 00:04:55.582 05:02:14 -- setup/hugepages.sh@93 -- # local resv 00:04:55.582 05:02:14 -- setup/hugepages.sh@94 -- # local anon 00:04:55.582 05:02:14 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:55.582 05:02:14 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:55.582 05:02:14 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:55.582 05:02:14 -- setup/common.sh@18 -- # local node= 00:04:55.582 05:02:14 -- setup/common.sh@19 -- # local var val 00:04:55.582 05:02:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.582 05:02:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.582 05:02:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.582 05:02:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.582 05:02:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.582 05:02:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.582 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.582 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.582 05:02:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 5039948 kB' 'MemAvailable: 9417788 kB' 'Buffers: 35112 kB' 'Cached: 4494784 kB' 'SwapCached: 0 kB' 'Active: 411112 kB' 'Inactive: 4231156 kB' 'Active(anon): 123712 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231156 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 141276 kB' 'Mapped: 57532 kB' 'Shmem: 2592 kB' 'KReclaimable: 181036 kB' 'Slab: 260632 kB' 'SReclaimable: 181036 kB' 'SUnreclaim: 79596 kB' 'KernelStack: 5016 kB' 'PageTables: 3920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074580 kB' 'Committed_AS: 372972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20104 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:04:55.582 05:02:14 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.582 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.582 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.582 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.582 05:02:14 -- setup/common.sh@32 -- # [[ MemFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.582 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.582 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.582 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.582 05:02:14 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.582 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.582 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.582 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.582 05:02:14 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.582 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.582 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.582 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.582 05:02:14 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.582 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.582 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.582 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.582 05:02:14 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.582 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.582 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.582 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.582 05:02:14 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.582 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.582 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.582 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.582 05:02:14 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.582 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.582 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.582 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.582 05:02:14 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.582 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.582 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.582 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.582 05:02:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.582 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.582 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.582 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.582 05:02:14 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.582 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.582 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.582 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.582 05:02:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.583 
05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.583 05:02:14 -- 
setup/common.sh@32 -- # continue 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.583 05:02:14 -- setup/common.sh@33 -- # echo 0 00:04:55.583 05:02:14 -- setup/common.sh@33 -- # return 0 00:04:55.583 05:02:14 -- setup/hugepages.sh@97 -- # anon=0 00:04:55.583 05:02:14 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:55.583 05:02:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.583 05:02:14 -- setup/common.sh@18 -- # local node= 00:04:55.583 05:02:14 -- setup/common.sh@19 -- # local var val 00:04:55.583 05:02:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.583 05:02:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.583 05:02:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.583 05:02:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.583 05:02:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.583 05:02:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.583 05:02:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 5040468 kB' 'MemAvailable: 9418308 kB' 'Buffers: 35112 kB' 'Cached: 4494784 kB' 'SwapCached: 0 kB' 'Active: 410872 kB' 'Inactive: 4231156 kB' 'Active(anon): 123472 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231156 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 141272 kB' 'Mapped: 57532 kB' 'Shmem: 2592 kB' 'KReclaimable: 181036 kB' 'Slab: 260640 kB' 'SReclaimable: 181036 kB' 'SUnreclaim: 79604 kB' 'KernelStack: 5000 kB' 'PageTables: 3868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074580 kB' 'Committed_AS: 372972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20072 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.583 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.583 05:02:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 
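After `setup.sh` reports that 512 huge pages were requested but 1024 were already allocated on node0, `verify_nr_hugepages` runs a second pass; its first step is the anonymous-huge-page probe seen here, which only queries `AnonHugePages` when transparent huge pages are not disabled. A hedged sketch, assuming the `always [madvise] never` string in the trace comes from the standard THP sysfs knob (the log shows only the string, not the file it was read from):

```bash
# Sketch of the anonymous-huge-page probe at the start of this second pass;
# the sysfs path below is an assumption, and get_meminfo is the sketch above.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
anon=0
if [[ $thp != *"[never]"* ]]; then
    # THP is enabled (or madvise-only), so count transparent huge pages too.
    anon=$(get_meminfo AnonHugePages)   # 0 kB in the run above
fi
echo "anon_hugepages=$anon"
```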
05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # 
continue 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.584 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.584 05:02:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.585 05:02:14 -- setup/common.sh@33 -- # echo 0 00:04:55.585 05:02:14 -- setup/common.sh@33 -- # return 0 00:04:55.585 05:02:14 -- setup/hugepages.sh@99 -- # surp=0 00:04:55.585 05:02:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:55.585 05:02:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:55.585 05:02:14 -- setup/common.sh@18 -- # local node= 00:04:55.585 05:02:14 -- setup/common.sh@19 -- # local var val 00:04:55.585 05:02:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.585 05:02:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.585 05:02:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.585 05:02:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.585 05:02:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.585 05:02:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.585 05:02:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 5040480 kB' 'MemAvailable: 9418320 kB' 'Buffers: 35112 kB' 'Cached: 4494784 kB' 'SwapCached: 0 kB' 'Active: 410800 kB' 'Inactive: 4231156 kB' 'Active(anon): 123400 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 
'Inactive(file): 4231156 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 141180 kB' 'Mapped: 57532 kB' 'Shmem: 2592 kB' 'KReclaimable: 181036 kB' 'Slab: 260636 kB' 'SReclaimable: 181036 kB' 'SUnreclaim: 79600 kB' 'KernelStack: 4952 kB' 'PageTables: 3980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074580 kB' 'Committed_AS: 372972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20072 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.585 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.585 05:02:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # continue 
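The blocks of IFS=': ' / read -r var val _ / [[ key == pattern ]] / continue entries filling this part of the log are the xtrace of setup/common.sh's get_meminfo helper: it snapshots meminfo once (the long printf above) and then scans it key by key until it reaches the one requested, echoing that value and returning. A minimal stand-alone sketch of that lookup, reading the file directly rather than through a captured array as the real helper does (an approximation reconstructed from the trace, not the SPDK source):

#!/usr/bin/env bash
# get_meminfo KEY [FILE] - print KEY's value from a meminfo-style file.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip MemTotal, MemFree, ... until the key matches
        echo "${val:-0}"
        return 0
    done < "$mem_f"
    echo 0                                 # key absent: report 0, like the trace's final 'echo 0'
}

get_meminfo HugePages_Surp    # prints 0 in the run traced here
get_meminfo HugePages_Total   # prints 1024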
00:04:55.586 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.586 05:02:14 -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.586 05:02:14 -- setup/common.sh@33 -- # echo 0 00:04:55.586 05:02:14 -- setup/common.sh@33 -- # return 0 00:04:55.586 05:02:14 -- setup/hugepages.sh@100 -- # resv=0 00:04:55.586 nr_hugepages=1024 00:04:55.586 05:02:14 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:55.586 resv_hugepages=0 00:04:55.586 05:02:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:55.586 05:02:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:55.586 surplus_hugepages=0 00:04:55.586 anon_hugepages=0 00:04:55.586 05:02:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:55.586 05:02:14 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:55.586 05:02:14 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:55.586 05:02:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:55.586 05:02:14 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:55.586 05:02:14 -- setup/common.sh@18 -- # local node= 00:04:55.586 05:02:14 -- setup/common.sh@19 -- # local var val 00:04:55.586 05:02:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.586 05:02:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.586 05:02:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.586 05:02:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.586 05:02:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.586 05:02:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.586 05:02:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 5040480 kB' 'MemAvailable: 9418320 kB' 'Buffers: 35112 kB' 'Cached: 4494784 kB' 'SwapCached: 0 kB' 'Active: 410784 kB' 'Inactive: 4231156 kB' 'Active(anon): 123384 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231156 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 140932 kB' 'Mapped: 57532 kB' 'Shmem: 2592 kB' 'KReclaimable: 181036 kB' 'Slab: 260636 kB' 'SReclaimable: 181036 kB' 'SUnreclaim: 79600 kB' 'KernelStack: 4952 kB' 'PageTables: 3984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074580 kB' 'Committed_AS: 372972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20072 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.586 
05:02:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.586 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.586 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.587 
05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.587 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.587 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 
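The pass that finishes just below re-reads HugePages_Total and feeds it into the arithmetic checks echoed earlier (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0). The identity the no_shrink_alloc test asserts, written out as a stand-alone check (a hedged reconstruction of the hugepages.sh@107-110 checks, with this run's values hard-coded):

#!/usr/bin/env bash
nr_hugepages=1024   # pool size the test configured
surp=0              # HugePages_Surp reported above
resv=0              # HugePages_Rsvd reported above

total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)

# The pool must not have shrunk: the kernel still reports every page
# that was requested, plus any surplus and reserved pages.
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage pool intact: $total pages"
else
    echo "hugepage pool shrank: expected $((nr_hugepages + surp + resv)), got $total" >&2
    exit 1
fi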
00:04:55.588 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.588 05:02:14 -- setup/common.sh@33 -- # echo 1024 00:04:55.588 05:02:14 -- setup/common.sh@33 -- # return 0 00:04:55.588 05:02:14 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:55.588 05:02:14 -- setup/hugepages.sh@112 -- # get_nodes 00:04:55.588 05:02:14 -- setup/hugepages.sh@27 -- # local node 00:04:55.588 05:02:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.588 05:02:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:55.588 05:02:14 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:55.588 05:02:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:55.588 05:02:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:55.588 05:02:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:55.588 05:02:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:55.588 05:02:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.588 05:02:14 -- setup/common.sh@18 -- # local node=0 00:04:55.588 05:02:14 -- setup/common.sh@19 -- # local var val 00:04:55.588 05:02:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.588 05:02:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.588 05:02:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:55.588 05:02:14 -- 
setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:55.588 05:02:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.588 05:02:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.588 05:02:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246316 kB' 'MemFree: 5040480 kB' 'MemUsed: 7205836 kB' 'SwapCached: 0 kB' 'Active: 410784 kB' 'Inactive: 4231156 kB' 'Active(anon): 123384 kB' 'Inactive(anon): 0 kB' 'Active(file): 287400 kB' 'Inactive(file): 4231156 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4529896 kB' 'Mapped: 57532 kB' 'AnonPages: 141192 kB' 'Shmem: 2592 kB' 'KernelStack: 5020 kB' 'PageTables: 3984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 181036 kB' 'Slab: 260636 kB' 'SReclaimable: 181036 kB' 'SUnreclaim: 79600 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
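For this per-node pass the helper switches its source from /proc/meminfo to /sys/devices/system/node/node0/meminfo and strips the leading "Node 0 " column before scanning, which is why the snapshot above carries MemUsed and FilePages instead of the system-wide fields. A rough equivalent for pulling one node's hugepage counters (illustrative only; same sysfs layout as shown in the trace):

#!/usr/bin/env bash
shopt -s extglob

node=0
mem_f=/sys/devices/system/node/node$node/meminfo

# Per-node lines read "Node 0 HugePages_Total:  1024"; dropping the
# "Node N " prefix lets the same key/value scan work as for /proc/meminfo.
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")

for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    case $var in
        HugePages_Total|HugePages_Free|HugePages_Surp)
            echo "node$node $var = $val" ;;
    esac
done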
00:04:55.588 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 
00:04:55.588 05:02:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.588 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.588 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.589 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.589 05:02:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.589 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.589 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.589 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.589 05:02:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.589 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.589 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.589 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.589 05:02:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.589 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.589 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.589 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.589 05:02:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.589 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.589 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.589 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.589 05:02:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.589 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.589 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.589 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.589 05:02:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.589 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.589 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.589 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.589 05:02:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.589 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.589 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.589 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.589 05:02:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.589 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.589 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.589 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.589 05:02:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.589 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.589 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.589 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.589 05:02:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.589 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.589 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.589 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.589 05:02:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.589 05:02:14 -- setup/common.sh@32 -- # continue 
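The node loop that closes just below adds the reserved and surplus pages it found to the count expected on each node and compares that with what the node actually reports, which is where the "node0=1024 expecting 1024" line comes from. A simplified sketch of that bookkeeping (variable names and the awk extraction are illustrative, not the hugepages.sh source):

#!/usr/bin/env bash
# One NUMA node in this run, configured with a 1024-page pool.
declare -A nodes_test=([0]=1024)
resv=0   # HugePages_Rsvd from the earlier pass

for node in "${!nodes_test[@]}"; do
    meminfo=/sys/devices/system/node/node"$node"/meminfo
    surp=$(awk '$3 == "HugePages_Surp:"  {print $4}' "$meminfo")     # 0 in this run
    total=$(awk '$3 == "HugePages_Total:" {print $4}' "$meminfo")    # 1024 in this run
    (( nodes_test[node] += resv ))   # reserved pages still belong to the pool
    (( nodes_test[node] += surp ))   # surplus pages too
    echo "node$node=$total expecting ${nodes_test[node]}"
    [[ $total == "${nodes_test[node]}" ]] || exit 1
done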
00:04:55.589 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.589 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.589 05:02:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.589 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.589 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.589 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.589 05:02:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.589 05:02:14 -- setup/common.sh@32 -- # continue 00:04:55.589 05:02:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.589 05:02:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.589 05:02:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.589 05:02:14 -- setup/common.sh@33 -- # echo 0 00:04:55.589 05:02:14 -- setup/common.sh@33 -- # return 0 00:04:55.589 05:02:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:55.589 05:02:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:55.589 05:02:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:55.589 05:02:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:55.589 node0=1024 expecting 1024 00:04:55.589 05:02:14 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:55.589 05:02:14 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:55.589 00:04:55.589 real 0m1.224s 00:04:55.589 user 0m0.482s 00:04:55.589 sys 0m0.824s 00:04:55.589 05:02:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.589 05:02:14 -- common/autotest_common.sh@10 -- # set +x 00:04:55.589 ************************************ 00:04:55.589 END TEST no_shrink_alloc 00:04:55.589 ************************************ 00:04:55.857 05:02:14 -- setup/hugepages.sh@217 -- # clear_hp 00:04:55.857 05:02:14 -- setup/hugepages.sh@37 -- # local node hp 00:04:55.857 05:02:14 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:55.857 05:02:14 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:55.857 05:02:14 -- setup/hugepages.sh@41 -- # echo 0 00:04:55.857 05:02:14 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:55.857 05:02:14 -- setup/hugepages.sh@41 -- # echo 0 00:04:55.857 05:02:14 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:55.857 05:02:14 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:55.857 00:04:55.857 real 0m5.300s 00:04:55.857 user 0m1.847s 00:04:55.857 sys 0m3.675s 00:04:55.857 05:02:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.857 05:02:14 -- common/autotest_common.sh@10 -- # set +x 00:04:55.857 ************************************ 00:04:55.857 END TEST hugepages 00:04:55.857 ************************************ 00:04:55.857 05:02:14 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:55.857 05:02:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:55.857 05:02:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:55.857 05:02:14 -- common/autotest_common.sh@10 -- # set +x 00:04:55.857 ************************************ 00:04:55.857 START TEST driver 00:04:55.857 ************************************ 00:04:55.857 05:02:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:55.857 * Looking for test storage... 
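The driver test that starts here decides between vfio-pci and uio_pci_generic: vfio is chosen only when IOMMU groups are populated (or the unsafe no-IOMMU override is enabled), and on this VM neither holds, so the test falls back to uio_pci_generic after confirming modprobe can resolve it. A condensed sketch of that decision, reconstructed from the trace rather than taken from driver.sh:

#!/usr/bin/env bash
pick_driver() {
    local groups=(/sys/kernel/iommu_groups/*) unsafe=''
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] \
        && unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)

    # vfio-pci needs a working IOMMU, or the explicit unsafe override.
    if [[ -e ${groups[0]} ]] || [[ $unsafe == Y ]]; then
        echo vfio-pci
        return 0
    fi

    # Fall back to uio_pci_generic if modprobe can resolve its module chain.
    if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
        return 0
    fi

    echo 'No valid driver found'
    return 1
}

echo "Looking for driver=$(pick_driver)"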
00:04:55.857 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:55.857 05:02:14 -- setup/driver.sh@68 -- # setup reset 00:04:55.857 05:02:14 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:55.857 05:02:14 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:56.426 05:02:15 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:56.426 05:02:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:56.426 05:02:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:56.426 05:02:15 -- common/autotest_common.sh@10 -- # set +x 00:04:56.426 ************************************ 00:04:56.426 START TEST guess_driver 00:04:56.426 ************************************ 00:04:56.426 05:02:15 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:56.426 05:02:15 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:56.426 05:02:15 -- setup/driver.sh@47 -- # local fail=0 00:04:56.426 05:02:15 -- setup/driver.sh@49 -- # pick_driver 00:04:56.426 05:02:15 -- setup/driver.sh@36 -- # vfio 00:04:56.426 05:02:15 -- setup/driver.sh@21 -- # local iommu_grups 00:04:56.426 05:02:15 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:56.426 05:02:15 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:56.426 05:02:15 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:56.426 05:02:15 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:56.426 05:02:15 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:56.426 05:02:15 -- setup/driver.sh@32 -- # return 1 00:04:56.426 05:02:15 -- setup/driver.sh@38 -- # uio 00:04:56.426 05:02:15 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:56.426 05:02:15 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:56.426 05:02:15 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:56.426 05:02:15 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:56.426 05:02:15 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.0-36-generic/kernel/drivers/uio/uio.ko.zst 00:04:56.426 insmod /lib/modules/6.8.0-36-generic/kernel/drivers/uio/uio_pci_generic.ko.zst == *\.\k\o* ]] 00:04:56.426 05:02:15 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:56.426 05:02:15 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:56.426 05:02:15 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:56.426 Looking for driver=uio_pci_generic 00:04:56.426 05:02:15 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:56.426 05:02:15 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:56.426 05:02:15 -- setup/driver.sh@45 -- # setup output config 00:04:56.426 05:02:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.426 05:02:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:56.685 05:02:15 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:56.685 05:02:15 -- setup/driver.sh@58 -- # continue 00:04:56.685 05:02:15 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:56.685 05:02:15 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:56.685 05:02:15 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:56.685 05:02:15 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:57.252 05:02:16 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:57.252 05:02:16 -- setup/driver.sh@65 -- # setup reset 00:04:57.252 05:02:16 -- setup/common.sh@9 -- # [[ reset == 
output ]] 00:04:57.252 05:02:16 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:57.819 00:04:57.819 real 0m1.521s 00:04:57.819 user 0m0.359s 00:04:57.819 sys 0m1.200s 00:04:57.819 05:02:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.819 05:02:16 -- common/autotest_common.sh@10 -- # set +x 00:04:57.819 ************************************ 00:04:57.819 END TEST guess_driver 00:04:57.819 ************************************ 00:04:57.819 00:04:57.819 real 0m2.077s 00:04:57.819 user 0m0.555s 00:04:57.819 sys 0m1.621s 00:04:57.819 05:02:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.819 05:02:16 -- common/autotest_common.sh@10 -- # set +x 00:04:57.819 ************************************ 00:04:57.819 END TEST driver 00:04:57.819 ************************************ 00:04:57.819 05:02:16 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:57.819 05:02:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:57.819 05:02:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:57.819 05:02:16 -- common/autotest_common.sh@10 -- # set +x 00:04:57.819 ************************************ 00:04:57.819 START TEST devices 00:04:57.819 ************************************ 00:04:57.819 05:02:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:58.077 * Looking for test storage... 00:04:58.077 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:58.077 05:02:16 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:58.077 05:02:16 -- setup/devices.sh@192 -- # setup reset 00:04:58.077 05:02:16 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:58.077 05:02:16 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:58.336 05:02:17 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:58.336 05:02:17 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:58.336 05:02:17 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:58.336 05:02:17 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:58.336 05:02:17 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:58.336 05:02:17 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:58.336 05:02:17 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:58.336 05:02:17 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:58.336 05:02:17 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:58.336 05:02:17 -- setup/devices.sh@196 -- # blocks=() 00:04:58.336 05:02:17 -- setup/devices.sh@196 -- # declare -a blocks 00:04:58.336 05:02:17 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:58.336 05:02:17 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:58.336 05:02:17 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:58.336 05:02:17 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:58.337 05:02:17 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:58.337 05:02:17 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:58.337 05:02:17 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:58.337 05:02:17 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:58.337 05:02:17 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:58.337 05:02:17 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:58.337 05:02:17 -- scripts/common.sh@389 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:58.596 No valid GPT data, bailing 00:04:58.596 05:02:17 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:58.596 05:02:17 -- scripts/common.sh@393 -- # pt= 00:04:58.596 05:02:17 -- scripts/common.sh@394 -- # return 1 00:04:58.596 05:02:17 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:58.596 05:02:17 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:58.596 05:02:17 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:58.596 05:02:17 -- setup/common.sh@80 -- # echo 5368709120 00:04:58.596 05:02:17 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:58.596 05:02:17 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:58.596 05:02:17 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:58.596 05:02:17 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:58.596 05:02:17 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:58.596 05:02:17 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:58.596 05:02:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:58.596 05:02:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:58.596 05:02:17 -- common/autotest_common.sh@10 -- # set +x 00:04:58.596 ************************************ 00:04:58.596 START TEST nvme_mount 00:04:58.596 ************************************ 00:04:58.596 05:02:17 -- common/autotest_common.sh@1104 -- # nvme_mount 00:04:58.596 05:02:17 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:58.596 05:02:17 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:58.596 05:02:17 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:58.596 05:02:17 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:58.596 05:02:17 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:58.596 05:02:17 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:58.596 05:02:17 -- setup/common.sh@40 -- # local part_no=1 00:04:58.596 05:02:17 -- setup/common.sh@41 -- # local size=1073741824 00:04:58.596 05:02:17 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:58.596 05:02:17 -- setup/common.sh@44 -- # parts=() 00:04:58.596 05:02:17 -- setup/common.sh@44 -- # local parts 00:04:58.596 05:02:17 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:58.596 05:02:17 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:58.596 05:02:17 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:58.596 05:02:17 -- setup/common.sh@46 -- # (( part++ )) 00:04:58.596 05:02:17 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:58.596 05:02:17 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:58.596 05:02:17 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:58.596 05:02:17 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:59.533 Creating new GPT entries in memory. 00:04:59.533 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:59.533 other utilities. 00:04:59.533 05:02:18 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:59.533 05:02:18 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:59.533 05:02:18 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:59.533 05:02:18 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:59.533 05:02:18 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:00.470 Creating new GPT entries in memory. 00:05:00.470 The operation has completed successfully. 00:05:00.470 05:02:19 -- setup/common.sh@57 -- # (( part++ )) 00:05:00.470 05:02:19 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:00.470 05:02:19 -- setup/common.sh@62 -- # wait 55319 00:05:00.729 05:02:19 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:00.729 05:02:19 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:00.729 05:02:19 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:00.729 05:02:19 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:00.729 05:02:19 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:00.729 05:02:19 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:00.729 05:02:19 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:00.729 05:02:19 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:00.729 05:02:19 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:00.729 05:02:19 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:00.729 05:02:19 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:00.729 05:02:19 -- setup/devices.sh@53 -- # local found=0 00:05:00.729 05:02:19 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:00.729 05:02:19 -- setup/devices.sh@56 -- # : 00:05:00.729 05:02:19 -- setup/devices.sh@59 -- # local pci status 00:05:00.729 05:02:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.729 05:02:19 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:00.729 05:02:19 -- setup/devices.sh@47 -- # setup output config 00:05:00.729 05:02:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.729 05:02:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:00.729 05:02:19 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:00.729 05:02:19 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:00.729 05:02:19 -- setup/devices.sh@63 -- # found=1 00:05:00.729 05:02:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.988 05:02:19 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:00.988 05:02:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.988 05:02:19 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:00.988 05:02:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.556 05:02:20 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:01.556 05:02:20 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:01.556 05:02:20 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:01.556 05:02:20 -- setup/devices.sh@73 -- # 
[[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:01.556 05:02:20 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:01.556 05:02:20 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:01.556 05:02:20 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:01.556 05:02:20 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:01.556 05:02:20 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.556 05:02:20 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:01.556 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:01.556 05:02:20 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:01.556 05:02:20 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:01.815 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:01.815 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:01.815 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:01.815 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:01.815 05:02:20 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:01.815 05:02:20 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:01.815 05:02:20 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:01.815 05:02:20 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:01.815 05:02:20 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:01.815 05:02:20 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:01.815 05:02:20 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:01.815 05:02:20 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:01.815 05:02:20 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:01.815 05:02:20 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:01.815 05:02:20 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:01.815 05:02:20 -- setup/devices.sh@53 -- # local found=0 00:05:01.815 05:02:20 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:01.815 05:02:20 -- setup/devices.sh@56 -- # : 00:05:01.815 05:02:20 -- setup/devices.sh@59 -- # local pci status 00:05:01.815 05:02:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.815 05:02:20 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:01.815 05:02:20 -- setup/devices.sh@47 -- # setup output config 00:05:01.815 05:02:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.815 05:02:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:02.074 05:02:21 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:02.074 05:02:21 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:02.074 05:02:21 -- setup/devices.sh@63 -- # found=1 00:05:02.074 05:02:21 -- setup/devices.sh@60 -- # read -r pci 
_ _ status 00:05:02.074 05:02:21 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:02.074 05:02:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.074 05:02:21 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:02.074 05:02:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.642 05:02:21 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:02.642 05:02:21 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:02.642 05:02:21 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:02.642 05:02:21 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:02.642 05:02:21 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:02.642 05:02:21 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:02.642 05:02:21 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:02.642 05:02:21 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:02.642 05:02:21 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:02.642 05:02:21 -- setup/devices.sh@50 -- # local mount_point= 00:05:02.642 05:02:21 -- setup/devices.sh@51 -- # local test_file= 00:05:02.642 05:02:21 -- setup/devices.sh@53 -- # local found=0 00:05:02.642 05:02:21 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:02.642 05:02:21 -- setup/devices.sh@59 -- # local pci status 00:05:02.642 05:02:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.642 05:02:21 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:02.642 05:02:21 -- setup/devices.sh@47 -- # setup output config 00:05:02.642 05:02:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.642 05:02:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:02.901 05:02:21 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:02.901 05:02:21 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:02.901 05:02:21 -- setup/devices.sh@63 -- # found=1 00:05:02.901 05:02:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.901 05:02:21 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:02.901 05:02:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.160 05:02:22 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:03.160 05:02:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.727 05:02:22 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:03.727 05:02:22 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:03.727 05:02:22 -- setup/devices.sh@68 -- # return 0 00:05:03.727 05:02:22 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:03.727 05:02:22 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:03.727 05:02:22 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:03.727 05:02:22 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:03.727 05:02:22 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:03.727 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:03.727 00:05:03.727 real 0m5.143s 00:05:03.727 user 0m0.547s 00:05:03.727 sys 0m2.399s 00:05:03.727 05:02:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.727 05:02:22 -- 
common/autotest_common.sh@10 -- # set +x 00:05:03.727 ************************************ 00:05:03.727 END TEST nvme_mount 00:05:03.727 ************************************ 00:05:03.727 05:02:22 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:03.727 05:02:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:03.727 05:02:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:03.727 05:02:22 -- common/autotest_common.sh@10 -- # set +x 00:05:03.727 ************************************ 00:05:03.727 START TEST dm_mount 00:05:03.727 ************************************ 00:05:03.727 05:02:22 -- common/autotest_common.sh@1104 -- # dm_mount 00:05:03.727 05:02:22 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:03.727 05:02:22 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:03.727 05:02:22 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:03.727 05:02:22 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:03.727 05:02:22 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:03.727 05:02:22 -- setup/common.sh@40 -- # local part_no=2 00:05:03.727 05:02:22 -- setup/common.sh@41 -- # local size=1073741824 00:05:03.727 05:02:22 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:03.727 05:02:22 -- setup/common.sh@44 -- # parts=() 00:05:03.727 05:02:22 -- setup/common.sh@44 -- # local parts 00:05:03.727 05:02:22 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:03.727 05:02:22 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:03.727 05:02:22 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:03.727 05:02:22 -- setup/common.sh@46 -- # (( part++ )) 00:05:03.727 05:02:22 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:03.727 05:02:22 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:03.727 05:02:22 -- setup/common.sh@46 -- # (( part++ )) 00:05:03.727 05:02:22 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:03.727 05:02:22 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:03.727 05:02:22 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:03.727 05:02:22 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:04.671 Creating new GPT entries in memory. 00:05:04.671 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:04.671 other utilities. 00:05:04.671 05:02:23 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:04.671 05:02:23 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:04.671 05:02:23 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:04.671 05:02:23 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:04.671 05:02:23 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:06.063 Creating new GPT entries in memory. 00:05:06.063 The operation has completed successfully. 00:05:06.063 05:02:24 -- setup/common.sh@57 -- # (( part++ )) 00:05:06.063 05:02:24 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:06.063 05:02:24 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:06.063 05:02:24 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:06.063 05:02:24 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:06.999 The operation has completed successfully. 
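The sgdisk runs traced above (--new=1:2048:264191 earlier for nvme_mount, and --new=1:2048:264191 followed by --new=2:264192:526335 here for dm_mount) come from the partitioning loop in test/setup/common.sh. Stripped of the sync_dev_uevents.sh wrapper and error handling, the flow amounts to the following sketch; the device name and sizes are taken from the trace, nothing else is implied:

    # Sketch of the partitioning loop seen above; uevent synchronization is omitted.
    disk=nvme0n1
    part_no=2                               # dm_mount creates two partitions
    size=$((1073741824 / 4096))             # 262144 sectors per partition, as traced
    sgdisk "/dev/$disk" --zap-all           # wipe any existing GPT/MBR first
    part_start=0 part_end=0
    for ((part = 1; part <= part_no; part++)); do
        ((part_start = part_start == 0 ? 2048 : part_end + 1))
        ((part_end = part_start + size - 1))
        flock "/dev/$disk" sgdisk "/dev/$disk" --new="$part:$part_start:$part_end"
    done
    # part 1 -> 2048:264191, part 2 -> 264192:526335, matching the trace above.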
00:05:06.999 05:02:25 -- setup/common.sh@57 -- # (( part++ )) 00:05:06.999 05:02:25 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:06.999 05:02:25 -- setup/common.sh@62 -- # wait 55743 00:05:06.999 05:02:25 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:06.999 05:02:25 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:06.999 05:02:25 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:06.999 05:02:25 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:06.999 05:02:25 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:06.999 05:02:25 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:06.999 05:02:25 -- setup/devices.sh@161 -- # break 00:05:06.999 05:02:25 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:06.999 05:02:25 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:06.999 05:02:25 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:06.999 05:02:25 -- setup/devices.sh@166 -- # dm=dm-0 00:05:06.999 05:02:25 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:06.999 05:02:25 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:06.999 05:02:25 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:06.999 05:02:25 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:06.999 05:02:25 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:06.999 05:02:25 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:06.999 05:02:25 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:06.999 05:02:25 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:06.999 05:02:25 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:06.999 05:02:25 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:06.999 05:02:25 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:06.999 05:02:25 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:06.999 05:02:25 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:06.999 05:02:25 -- setup/devices.sh@53 -- # local found=0 00:05:06.999 05:02:25 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:06.999 05:02:25 -- setup/devices.sh@56 -- # : 00:05:06.999 05:02:25 -- setup/devices.sh@59 -- # local pci status 00:05:06.999 05:02:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.999 05:02:25 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:06.999 05:02:25 -- setup/devices.sh@47 -- # setup output config 00:05:06.999 05:02:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.999 05:02:25 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:06.999 05:02:26 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:06.999 05:02:26 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:06.999 05:02:26 -- setup/devices.sh@63 -- # found=1 00:05:06.999 05:02:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.999 05:02:26 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:06.999 05:02:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.258 05:02:26 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:07.258 05:02:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.825 05:02:26 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:07.825 05:02:26 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:07.825 05:02:26 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:07.825 05:02:26 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:07.825 05:02:26 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:07.825 05:02:26 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:07.825 05:02:26 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:07.825 05:02:26 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:07.825 05:02:26 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:07.825 05:02:26 -- setup/devices.sh@50 -- # local mount_point= 00:05:07.825 05:02:26 -- setup/devices.sh@51 -- # local test_file= 00:05:07.825 05:02:26 -- setup/devices.sh@53 -- # local found=0 00:05:07.825 05:02:26 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:07.825 05:02:26 -- setup/devices.sh@59 -- # local pci status 00:05:07.825 05:02:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.825 05:02:26 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:07.825 05:02:26 -- setup/devices.sh@47 -- # setup output config 00:05:07.825 05:02:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.825 05:02:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:08.084 05:02:26 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:08.084 05:02:26 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:08.084 05:02:26 -- setup/devices.sh@63 -- # found=1 00:05:08.084 05:02:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.084 05:02:26 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:08.084 05:02:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.084 05:02:27 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:08.084 05:02:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.652 05:02:27 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:08.652 05:02:27 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:08.652 05:02:27 -- setup/devices.sh@68 -- # return 0 00:05:08.652 05:02:27 -- setup/devices.sh@187 -- # cleanup_dm 00:05:08.652 05:02:27 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:08.652 05:02:27 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:08.652 05:02:27 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:08.652 05:02:27 -- 
setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:08.652 05:02:27 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:08.652 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:08.652 05:02:27 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:08.652 05:02:27 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:08.652 00:05:08.652 real 0m4.962s 00:05:08.652 user 0m0.335s 00:05:08.652 sys 0m1.576s 00:05:08.652 05:02:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.652 05:02:27 -- common/autotest_common.sh@10 -- # set +x 00:05:08.652 ************************************ 00:05:08.652 END TEST dm_mount 00:05:08.652 ************************************ 00:05:08.652 05:02:27 -- setup/devices.sh@1 -- # cleanup 00:05:08.652 05:02:27 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:08.652 05:02:27 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:08.652 05:02:27 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:08.653 05:02:27 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:08.653 05:02:27 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:08.653 05:02:27 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:08.911 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:08.912 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:08.912 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:08.912 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:08.912 05:02:27 -- setup/devices.sh@12 -- # cleanup_dm 00:05:08.912 05:02:27 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:08.912 05:02:27 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:08.912 05:02:27 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:08.912 05:02:27 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:08.912 05:02:27 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:08.912 05:02:27 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:08.912 00:05:08.912 real 0m11.113s 00:05:08.912 user 0m1.182s 00:05:08.912 sys 0m4.452s 00:05:08.912 05:02:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.912 05:02:27 -- common/autotest_common.sh@10 -- # set +x 00:05:08.912 ************************************ 00:05:08.912 END TEST devices 00:05:08.912 ************************************ 00:05:09.171 00:05:09.171 real 0m22.597s 00:05:09.171 user 0m4.738s 00:05:09.171 sys 0m12.861s 00:05:09.171 05:02:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.171 05:02:28 -- common/autotest_common.sh@10 -- # set +x 00:05:09.171 ************************************ 00:05:09.171 END TEST setup.sh 00:05:09.171 ************************************ 00:05:09.171 05:02:28 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:09.171 Hugepages 00:05:09.171 node hugesize free / total 00:05:09.171 node0 1048576kB 0 / 0 00:05:09.171 node0 2048kB 2048 / 2048 00:05:09.171 00:05:09.171 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:09.430 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:09.430 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:09.430 05:02:28 -- spdk/autotest.sh@141 -- # uname -s 00:05:09.430 05:02:28 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:09.430 05:02:28 -- spdk/autotest.sh@143 -- # 
nvme_namespace_revert 00:05:09.430 05:02:28 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:09.688 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:05:09.947 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:10.514 05:02:29 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:11.449 05:02:30 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:11.449 05:02:30 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:11.449 05:02:30 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:11.449 05:02:30 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:11.449 05:02:30 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:11.449 05:02:30 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:11.449 05:02:30 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:11.449 05:02:30 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:11.449 05:02:30 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:11.449 05:02:30 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:11.449 05:02:30 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:05:11.449 05:02:30 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:11.708 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:05:11.966 Waiting for block devices as requested 00:05:11.966 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:11.966 05:02:30 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:11.966 05:02:30 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:11.966 05:02:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:11.966 05:02:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:05:11.966 05:02:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:11.966 05:02:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:11.966 05:02:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:11.966 05:02:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:11.966 05:02:30 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:11.966 05:02:30 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:11.966 05:02:30 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:11.966 05:02:30 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:11.966 05:02:30 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:11.966 05:02:30 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:11.966 05:02:30 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:11.966 05:02:30 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:11.966 05:02:30 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:11.966 05:02:30 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:11.966 05:02:30 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:11.966 05:02:30 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:11.966 05:02:30 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:11.966 05:02:30 -- common/autotest_common.sh@1542 -- # 
continue 00:05:11.966 05:02:30 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:11.966 05:02:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:11.966 05:02:30 -- common/autotest_common.sh@10 -- # set +x 00:05:11.966 05:02:31 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:11.966 05:02:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:11.966 05:02:31 -- common/autotest_common.sh@10 -- # set +x 00:05:11.966 05:02:31 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:12.532 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:05:12.532 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.100 05:02:32 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:13.100 05:02:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:13.100 05:02:32 -- common/autotest_common.sh@10 -- # set +x 00:05:13.100 05:02:32 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:13.100 05:02:32 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:13.100 05:02:32 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:13.100 05:02:32 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:13.100 05:02:32 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:13.100 05:02:32 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:13.100 05:02:32 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:13.100 05:02:32 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:13.100 05:02:32 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:13.100 05:02:32 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:13.100 05:02:32 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:13.100 05:02:32 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:13.100 05:02:32 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:05:13.100 05:02:32 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:13.100 05:02:32 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:13.100 05:02:32 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:13.100 05:02:32 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:13.100 05:02:32 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:05:13.100 05:02:32 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:13.100 05:02:32 -- common/autotest_common.sh@1578 -- # return 0 00:05:13.100 05:02:32 -- spdk/autotest.sh@161 -- # '[' 1 -eq 1 ']' 00:05:13.100 05:02:32 -- spdk/autotest.sh@162 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:13.100 05:02:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:13.100 05:02:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:13.100 05:02:32 -- common/autotest_common.sh@10 -- # set +x 00:05:13.100 ************************************ 00:05:13.100 START TEST unittest 00:05:13.100 ************************************ 00:05:13.100 05:02:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:13.360 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:13.360 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:05:13.360 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:05:13.360 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 
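The unittest.sh prologue that begins here first resolves its own directory and the repository root before sourcing the shared test environment. The traced dirname/readlink pairs reduce to the usual SPDK test preamble (sketch only, paths as in the trace):

    # testdir/rootdir resolution as traced in this prologue.
    testdir=$(readlink -f "$(dirname "$0")")     # /home/vagrant/spdk_repo/spdk/test/unit
    rootdir=$(readlink -f "$testdir/../..")      # /home/vagrant/spdk_repo/spdk
    source "$rootdir/test/common/autotest_common.sh"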
00:05:13.360 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 00:05:13.360 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:13.360 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:05:13.360 ++ rpc_py=rpc_cmd 00:05:13.360 ++ set -e 00:05:13.360 ++ shopt -s nullglob 00:05:13.360 ++ shopt -s extglob 00:05:13.360 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:13.360 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:13.360 +++ CONFIG_WPDK_DIR= 00:05:13.360 +++ CONFIG_ASAN=y 00:05:13.360 +++ CONFIG_VBDEV_COMPRESS=n 00:05:13.360 +++ CONFIG_HAVE_EXECINFO_H=y 00:05:13.360 +++ CONFIG_USDT=n 00:05:13.360 +++ CONFIG_CUSTOMOCF=n 00:05:13.360 +++ CONFIG_PREFIX=/usr/local 00:05:13.360 +++ CONFIG_RBD=n 00:05:13.360 +++ CONFIG_LIBDIR= 00:05:13.360 +++ CONFIG_IDXD=y 00:05:13.360 +++ CONFIG_NVME_CUSE=y 00:05:13.360 +++ CONFIG_SMA=n 00:05:13.360 +++ CONFIG_VTUNE=n 00:05:13.360 +++ CONFIG_TSAN=n 00:05:13.360 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:13.360 +++ CONFIG_VFIO_USER_DIR= 00:05:13.360 +++ CONFIG_PGO_CAPTURE=n 00:05:13.360 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:13.360 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:13.360 +++ CONFIG_LTO=n 00:05:13.360 +++ CONFIG_ISCSI_INITIATOR=y 00:05:13.360 +++ CONFIG_CET=n 00:05:13.360 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:13.360 +++ CONFIG_OCF_PATH= 00:05:13.360 +++ CONFIG_RDMA_SET_TOS=y 00:05:13.360 +++ CONFIG_HAVE_ARC4RANDOM=y 00:05:13.360 +++ CONFIG_HAVE_LIBARCHIVE=n 00:05:13.360 +++ CONFIG_UBLK=y 00:05:13.360 +++ CONFIG_ISAL_CRYPTO=y 00:05:13.360 +++ CONFIG_OPENSSL_PATH= 00:05:13.360 +++ CONFIG_OCF=n 00:05:13.360 +++ CONFIG_FUSE=n 00:05:13.360 +++ CONFIG_VTUNE_DIR= 00:05:13.360 +++ CONFIG_FUZZER_LIB= 00:05:13.360 +++ CONFIG_FUZZER=n 00:05:13.360 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:13.360 +++ CONFIG_CRYPTO=n 00:05:13.360 +++ CONFIG_PGO_USE=n 00:05:13.360 +++ CONFIG_VHOST=y 00:05:13.360 +++ CONFIG_DAOS=n 00:05:13.360 +++ CONFIG_DPDK_INC_DIR= 00:05:13.360 +++ CONFIG_DAOS_DIR= 00:05:13.360 +++ CONFIG_UNIT_TESTS=y 00:05:13.360 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:13.360 +++ CONFIG_VIRTIO=y 00:05:13.360 +++ CONFIG_COVERAGE=y 00:05:13.360 +++ CONFIG_RDMA=y 00:05:13.360 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:13.360 +++ CONFIG_URING_PATH= 00:05:13.360 +++ CONFIG_XNVME=n 00:05:13.360 +++ CONFIG_VFIO_USER=n 00:05:13.360 +++ CONFIG_ARCH=native 00:05:13.360 +++ CONFIG_URING_ZNS=n 00:05:13.360 +++ CONFIG_WERROR=y 00:05:13.360 +++ CONFIG_HAVE_LIBBSD=n 00:05:13.360 +++ CONFIG_UBSAN=y 00:05:13.360 +++ CONFIG_IPSEC_MB_DIR= 00:05:13.360 +++ CONFIG_GOLANG=n 00:05:13.360 +++ CONFIG_ISAL=y 00:05:13.360 +++ CONFIG_IDXD_KERNEL=y 00:05:13.360 +++ CONFIG_DPDK_LIB_DIR= 00:05:13.360 +++ CONFIG_RDMA_PROV=verbs 00:05:13.360 +++ CONFIG_APPS=y 00:05:13.360 +++ CONFIG_SHARED=n 00:05:13.360 +++ CONFIG_FC_PATH= 00:05:13.360 +++ CONFIG_DPDK_PKG_CONFIG=n 00:05:13.360 +++ CONFIG_FC=n 00:05:13.360 +++ CONFIG_AVAHI=n 00:05:13.360 +++ CONFIG_FIO_PLUGIN=y 00:05:13.360 +++ CONFIG_RAID5F=y 00:05:13.360 +++ CONFIG_EXAMPLES=y 00:05:13.360 +++ CONFIG_TESTS=y 00:05:13.360 +++ CONFIG_CRYPTO_MLX5=n 00:05:13.360 +++ CONFIG_MAX_LCORES= 00:05:13.360 +++ CONFIG_IPSEC_MB=n 00:05:13.360 +++ CONFIG_DEBUG=y 00:05:13.360 +++ CONFIG_DPDK_COMPRESSDEV=n 00:05:13.360 +++ CONFIG_CROSS_PREFIX= 00:05:13.360 +++ CONFIG_URING=n 00:05:13.360 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:13.360 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 
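A little further on, after applications.sh is pulled in, the prologue verifies that include/spdk/config.h was generated for a debug build; that is what produces the very long "[[ #ifndef SPDK_CONFIG_H ... ]]" comparison below. In shorthand, and only as a sketch of that check:

    # Illustrative shorthand of the config.h sanity check traced below.
    config_h=$rootdir/include/spdk/config.h
    if [[ -e $config_h && $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        :   # debug build confirmed; the SPDK_AUTOTEST_DEBUG_APPS knob may then take effect
    fi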
00:05:13.360 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:05:13.360 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:05:13.360 +++ _root=/home/vagrant/spdk_repo/spdk 00:05:13.360 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:05:13.360 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:05:13.360 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:05:13.360 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:13.360 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:13.360 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:13.360 +++ VHOST_APP=("$_app_dir/vhost") 00:05:13.360 +++ DD_APP=("$_app_dir/spdk_dd") 00:05:13.360 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:05:13.360 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:05:13.360 +++ [[ #ifndef SPDK_CONFIG_H 00:05:13.360 #define SPDK_CONFIG_H 00:05:13.360 #define SPDK_CONFIG_APPS 1 00:05:13.360 #define SPDK_CONFIG_ARCH native 00:05:13.360 #define SPDK_CONFIG_ASAN 1 00:05:13.360 #undef SPDK_CONFIG_AVAHI 00:05:13.360 #undef SPDK_CONFIG_CET 00:05:13.360 #define SPDK_CONFIG_COVERAGE 1 00:05:13.360 #define SPDK_CONFIG_CROSS_PREFIX 00:05:13.360 #undef SPDK_CONFIG_CRYPTO 00:05:13.360 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:13.360 #undef SPDK_CONFIG_CUSTOMOCF 00:05:13.360 #undef SPDK_CONFIG_DAOS 00:05:13.360 #define SPDK_CONFIG_DAOS_DIR 00:05:13.360 #define SPDK_CONFIG_DEBUG 1 00:05:13.360 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:13.360 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:13.360 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:13.360 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:13.360 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:13.360 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:13.360 #define SPDK_CONFIG_EXAMPLES 1 00:05:13.360 #undef SPDK_CONFIG_FC 00:05:13.360 #define SPDK_CONFIG_FC_PATH 00:05:13.360 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:13.360 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:13.360 #undef SPDK_CONFIG_FUSE 00:05:13.360 #undef SPDK_CONFIG_FUZZER 00:05:13.360 #define SPDK_CONFIG_FUZZER_LIB 00:05:13.360 #undef SPDK_CONFIG_GOLANG 00:05:13.360 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:05:13.360 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:13.360 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:13.360 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:13.360 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:13.360 #define SPDK_CONFIG_IDXD 1 00:05:13.360 #define SPDK_CONFIG_IDXD_KERNEL 1 00:05:13.360 #undef SPDK_CONFIG_IPSEC_MB 00:05:13.360 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:13.360 #define SPDK_CONFIG_ISAL 1 00:05:13.360 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:13.360 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:13.360 #define SPDK_CONFIG_LIBDIR 00:05:13.360 #undef SPDK_CONFIG_LTO 00:05:13.360 #define SPDK_CONFIG_MAX_LCORES 00:05:13.360 #define SPDK_CONFIG_NVME_CUSE 1 00:05:13.360 #undef SPDK_CONFIG_OCF 00:05:13.360 #define SPDK_CONFIG_OCF_PATH 00:05:13.360 #define SPDK_CONFIG_OPENSSL_PATH 00:05:13.360 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:13.360 #undef SPDK_CONFIG_PGO_USE 00:05:13.360 #define SPDK_CONFIG_PREFIX /usr/local 00:05:13.360 #define SPDK_CONFIG_RAID5F 1 00:05:13.360 #undef SPDK_CONFIG_RBD 00:05:13.360 #define SPDK_CONFIG_RDMA 1 00:05:13.360 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:13.360 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:13.360 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:13.360 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:13.360 #undef SPDK_CONFIG_SHARED 00:05:13.360 #undef SPDK_CONFIG_SMA 00:05:13.360 #define 
SPDK_CONFIG_TESTS 1 00:05:13.360 #undef SPDK_CONFIG_TSAN 00:05:13.360 #define SPDK_CONFIG_UBLK 1 00:05:13.360 #define SPDK_CONFIG_UBSAN 1 00:05:13.360 #define SPDK_CONFIG_UNIT_TESTS 1 00:05:13.360 #undef SPDK_CONFIG_URING 00:05:13.360 #define SPDK_CONFIG_URING_PATH 00:05:13.360 #undef SPDK_CONFIG_URING_ZNS 00:05:13.360 #undef SPDK_CONFIG_USDT 00:05:13.360 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:13.360 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:13.360 #undef SPDK_CONFIG_VFIO_USER 00:05:13.360 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:13.360 #define SPDK_CONFIG_VHOST 1 00:05:13.360 #define SPDK_CONFIG_VIRTIO 1 00:05:13.360 #undef SPDK_CONFIG_VTUNE 00:05:13.360 #define SPDK_CONFIG_VTUNE_DIR 00:05:13.360 #define SPDK_CONFIG_WERROR 1 00:05:13.360 #define SPDK_CONFIG_WPDK_DIR 00:05:13.360 #undef SPDK_CONFIG_XNVME 00:05:13.360 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:13.360 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:13.360 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:13.360 +++ [[ -e /bin/wpdk_common.sh ]] 00:05:13.360 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:13.360 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:13.360 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:13.361 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:13.361 ++++ PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:13.361 ++++ PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:13.361 ++++ export PATH 00:05:13.361 ++++ echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:13.361 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:13.361 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:13.361 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:13.361 +++ 
_pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:13.361 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:05:13.361 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:05:13.361 +++ TEST_TAG=N/A 00:05:13.361 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:05:13.361 ++ : 1 00:05:13.361 ++ export RUN_NIGHTLY 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_RUN_VALGRIND 00:05:13.361 ++ : 1 00:05:13.361 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:05:13.361 ++ : 1 00:05:13.361 ++ export SPDK_TEST_UNITTEST 00:05:13.361 ++ : 00:05:13.361 ++ export SPDK_TEST_AUTOBUILD 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_RELEASE_BUILD 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_ISAL 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_ISCSI 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_ISCSI_INITIATOR 00:05:13.361 ++ : 1 00:05:13.361 ++ export SPDK_TEST_NVME 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_NVME_PMR 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_NVME_BP 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_NVME_CLI 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_NVME_CUSE 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_NVME_FDP 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_NVMF 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_VFIOUSER 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_VFIOUSER_QEMU 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_FUZZER 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_FUZZER_SHORT 00:05:13.361 ++ : rdma 00:05:13.361 ++ export SPDK_TEST_NVMF_TRANSPORT 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_RBD 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_VHOST 00:05:13.361 ++ : 1 00:05:13.361 ++ export SPDK_TEST_BLOCKDEV 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_IOAT 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_BLOBFS 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_VHOST_INIT 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_LVOL 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_VBDEV_COMPRESS 00:05:13.361 ++ : 1 00:05:13.361 ++ export SPDK_RUN_ASAN 00:05:13.361 ++ : 1 00:05:13.361 ++ export SPDK_RUN_UBSAN 00:05:13.361 ++ : 00:05:13.361 ++ export SPDK_RUN_EXTERNAL_DPDK 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_RUN_NON_ROOT 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_CRYPTO 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_FTL 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_OCF 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_VMD 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_OPAL 00:05:13.361 ++ : 00:05:13.361 ++ export SPDK_TEST_NATIVE_DPDK 00:05:13.361 ++ : true 00:05:13.361 ++ export SPDK_AUTOTEST_X 00:05:13.361 ++ : 1 00:05:13.361 ++ export SPDK_TEST_RAID5 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_URING 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_USDT 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_USE_IGB_UIO 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_SCHEDULER 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_SCANBUILD 00:05:13.361 ++ : 00:05:13.361 ++ export SPDK_TEST_NVMF_NICS 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_SMA 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_DAOS 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_XNVME 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_ACCEL_DSA 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_ACCEL_IAA 
00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_ACCEL_IOAT 00:05:13.361 ++ : 00:05:13.361 ++ export SPDK_TEST_FUZZER_TARGET 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_TEST_NVMF_MDNS 00:05:13.361 ++ : 0 00:05:13.361 ++ export SPDK_JSONRPC_GO_CLIENT 00:05:13.361 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:13.361 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:13.361 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:05:13.361 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:05:13.361 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:13.361 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:13.361 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:13.361 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:13.361 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:13.361 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:05:13.361 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:13.361 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:13.361 ++ export PYTHONDONTWRITEBYTECODE=1 00:05:13.361 ++ PYTHONDONTWRITEBYTECODE=1 00:05:13.361 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:13.361 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:13.361 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:13.361 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:13.361 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:05:13.361 ++ rm -rf /var/tmp/asan_suppression_file 00:05:13.361 ++ cat 00:05:13.361 ++ echo leak:libfuse3.so 00:05:13.361 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:13.361 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:13.361 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:13.361 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:13.361 ++ '[' -z /var/spdk/dependencies ']' 00:05:13.361 ++ export DEPENDENCY_DIR 00:05:13.361 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:13.361 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:13.361 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:13.361 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:13.361 ++ export QEMU_BIN= 00:05:13.361 ++ QEMU_BIN= 00:05:13.361 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:13.361 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:13.361 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:13.361 ++ 
AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:13.361 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:13.361 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:13.361 ++ '[' 0 -eq 0 ']' 00:05:13.361 ++ export valgrind= 00:05:13.361 ++ valgrind= 00:05:13.361 +++ uname -s 00:05:13.361 ++ '[' Linux = Linux ']' 00:05:13.361 ++ HUGEMEM=4096 00:05:13.361 ++ export CLEAR_HUGE=yes 00:05:13.361 ++ CLEAR_HUGE=yes 00:05:13.361 ++ [[ 0 -eq 1 ]] 00:05:13.361 ++ [[ 0 -eq 1 ]] 00:05:13.361 ++ MAKE=make 00:05:13.361 +++ nproc 00:05:13.361 ++ MAKEFLAGS=-j10 00:05:13.361 ++ export HUGEMEM=4096 00:05:13.361 ++ HUGEMEM=4096 00:05:13.361 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:05:13.361 ++ NO_HUGE=() 00:05:13.361 ++ TEST_MODE= 00:05:13.361 ++ [[ -z '' ]] 00:05:13.361 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:13.361 ++ exec 00:05:13.361 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:13.361 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:05:13.361 ++ set_test_storage 2147483648 00:05:13.361 ++ [[ -v testdir ]] 00:05:13.361 ++ local requested_size=2147483648 00:05:13.361 ++ local mount target_dir 00:05:13.361 ++ local -A mounts fss sizes avails uses 00:05:13.361 ++ local source fs size avail mount use 00:05:13.361 ++ local storage_fallback storage_candidates 00:05:13.361 +++ mktemp -udt spdk.XXXXXX 00:05:13.361 ++ storage_fallback=/tmp/spdk.iVLGX9 00:05:13.362 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:13.362 ++ [[ -n '' ]] 00:05:13.362 ++ [[ -n '' ]] 00:05:13.362 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.iVLGX9/tests/unit /tmp/spdk.iVLGX9 00:05:13.362 ++ requested_size=2214592512 00:05:13.362 ++ read -r source fs size use avail _ mount 00:05:13.362 +++ df -T 00:05:13.362 +++ grep -v Filesystem 00:05:13.362 ++ mounts["$mount"]=tmpfs 00:05:13.362 ++ fss["$mount"]=tmpfs 00:05:13.362 ++ avails["$mount"]=1252954112 00:05:13.362 ++ sizes["$mount"]=1254023168 00:05:13.362 ++ uses["$mount"]=1069056 00:05:13.362 ++ read -r source fs size use avail _ mount 00:05:13.362 ++ mounts["$mount"]=/dev/vda1 00:05:13.362 ++ fss["$mount"]=ext4 00:05:13.362 ++ avails["$mount"]=10288529408 00:05:13.362 ++ sizes["$mount"]=19681529856 00:05:13.362 ++ uses["$mount"]=9376223232 00:05:13.362 ++ read -r source fs size use avail _ mount 00:05:13.362 ++ mounts["$mount"]=tmpfs 00:05:13.362 ++ fss["$mount"]=tmpfs 00:05:13.362 ++ avails["$mount"]=6270111744 00:05:13.362 ++ sizes["$mount"]=6270111744 00:05:13.362 ++ uses["$mount"]=0 00:05:13.362 ++ read -r source fs size use avail _ mount 00:05:13.362 ++ mounts["$mount"]=tmpfs 00:05:13.362 ++ fss["$mount"]=tmpfs 00:05:13.362 ++ avails["$mount"]=5242880 00:05:13.362 ++ sizes["$mount"]=5242880 00:05:13.362 ++ uses["$mount"]=0 00:05:13.362 ++ read -r source fs size use avail _ mount 00:05:13.362 ++ mounts["$mount"]=/dev/vda16 00:05:13.362 ++ fss["$mount"]=ext4 00:05:13.362 ++ avails["$mount"]=777306112 00:05:13.362 ++ sizes["$mount"]=923156480 00:05:13.362 ++ uses["$mount"]=81207296 00:05:13.362 ++ read -r source fs size use avail _ mount 00:05:13.362 ++ mounts["$mount"]=/dev/vda15 00:05:13.362 ++ fss["$mount"]=vfat 00:05:13.362 ++ avails["$mount"]=103000064 00:05:13.362 ++ sizes["$mount"]=109395968 00:05:13.362 ++ uses["$mount"]=6395904 00:05:13.362 ++ read -r source fs size use avail _ mount 00:05:13.362 ++ 
mounts["$mount"]=tmpfs 00:05:13.362 ++ fss["$mount"]=tmpfs 00:05:13.362 ++ avails["$mount"]=1254006784 00:05:13.362 ++ sizes["$mount"]=1254019072 00:05:13.362 ++ uses["$mount"]=12288 00:05:13.362 ++ read -r source fs size use avail _ mount 00:05:13.362 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output 00:05:13.362 ++ fss["$mount"]=fuse.sshfs 00:05:13.362 ++ avails["$mount"]=98062471168 00:05:13.362 ++ sizes["$mount"]=105088212992 00:05:13.362 ++ uses["$mount"]=1640308736 00:05:13.362 ++ read -r source fs size use avail _ mount 00:05:13.362 ++ printf '* Looking for test storage...\n' 00:05:13.362 * Looking for test storage... 00:05:13.362 ++ local target_space new_size 00:05:13.362 ++ for target_dir in "${storage_candidates[@]}" 00:05:13.362 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:05:13.362 +++ awk '$1 !~ /Filesystem/{print $6}' 00:05:13.362 ++ mount=/ 00:05:13.362 ++ target_space=10288529408 00:05:13.362 ++ (( target_space == 0 || target_space < requested_size )) 00:05:13.362 ++ (( target_space >= requested_size )) 00:05:13.362 ++ [[ ext4 == tmpfs ]] 00:05:13.362 ++ [[ ext4 == ramfs ]] 00:05:13.362 ++ [[ / == / ]] 00:05:13.362 ++ new_size=11590815744 00:05:13.362 ++ (( new_size * 100 / sizes[/] > 95 )) 00:05:13.362 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:13.362 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:13.362 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:05:13.362 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:05:13.362 ++ return 0 00:05:13.362 ++ set -o errtrace 00:05:13.362 ++ shopt -s extdebug 00:05:13.362 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:05:13.362 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:13.362 05:02:32 -- common/autotest_common.sh@1672 -- # true 00:05:13.362 05:02:32 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:05:13.362 05:02:32 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:05:13.362 05:02:32 -- common/autotest_common.sh@29 -- # exec 00:05:13.362 05:02:32 -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:13.362 05:02:32 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:05:13.362 05:02:32 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:13.362 05:02:32 -- common/autotest_common.sh@18 -- # set -x 00:05:13.362 05:02:32 -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:05:13.362 05:02:32 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:05:13.362 05:02:32 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:05:13.362 05:02:32 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:05:13.362 05:02:32 -- unit/unittest.sh@178 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:05:13.362 05:02:32 -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=gcc 00:05:13.362 05:02:32 -- unit/unittest.sh@179 -- # hash lcov 00:05:13.362 05:02:32 -- unit/unittest.sh@179 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:13.362 05:02:32 -- unit/unittest.sh@179 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:05:13.362 05:02:32 -- unit/unittest.sh@180 -- # cov_avail=yes 00:05:13.362 05:02:32 -- unit/unittest.sh@184 -- # '[' yes = yes ']' 00:05:13.362 05:02:32 -- unit/unittest.sh@186 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:05:13.362 05:02:32 -- unit/unittest.sh@189 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:13.362 05:02:32 -- unit/unittest.sh@191 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:13.362 05:02:32 -- unit/unittest.sh@199 -- # export 'LCOV_OPTS= 00:05:13.362 --rc lcov_branch_coverage=1 00:05:13.362 --rc lcov_function_coverage=1 00:05:13.362 --rc genhtml_branch_coverage=1 00:05:13.362 --rc genhtml_function_coverage=1 00:05:13.362 --rc genhtml_legend=1 00:05:13.362 --rc geninfo_all_blocks=1 00:05:13.362 ' 00:05:13.362 05:02:32 -- unit/unittest.sh@199 -- # LCOV_OPTS=' 00:05:13.362 --rc lcov_branch_coverage=1 00:05:13.362 --rc lcov_function_coverage=1 00:05:13.362 --rc genhtml_branch_coverage=1 00:05:13.362 --rc genhtml_function_coverage=1 00:05:13.362 --rc genhtml_legend=1 00:05:13.362 --rc geninfo_all_blocks=1 00:05:13.362 ' 00:05:13.362 05:02:32 -- unit/unittest.sh@200 -- # export 'LCOV=lcov 00:05:13.362 --rc lcov_branch_coverage=1 00:05:13.362 --rc lcov_function_coverage=1 00:05:13.362 --rc genhtml_branch_coverage=1 00:05:13.362 --rc genhtml_function_coverage=1 00:05:13.362 --rc genhtml_legend=1 00:05:13.362 --rc geninfo_all_blocks=1 00:05:13.362 --no-external' 00:05:13.362 05:02:32 -- unit/unittest.sh@200 -- # LCOV='lcov 00:05:13.362 --rc lcov_branch_coverage=1 00:05:13.362 --rc lcov_function_coverage=1 00:05:13.362 --rc genhtml_branch_coverage=1 00:05:13.362 --rc genhtml_function_coverage=1 00:05:13.362 --rc genhtml_legend=1 00:05:13.362 --rc geninfo_all_blocks=1 00:05:13.362 --no-external' 00:05:13.362 05:02:32 -- unit/unittest.sh@202 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . 
-t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:05:25.602 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:05:25.602 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:25.602 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:05:25.602 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:05:25.602 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:05:25.602 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:05:57.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:57.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:57.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:57.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:57.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:57.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:57.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:57.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:57.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:57.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:57.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:57.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:57.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:57.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:57.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:57.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:57.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:57.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:57.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:57.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:57.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:57.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:57.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:57.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:57.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:57.682 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:57.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:57.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:57.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:57.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:57.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:57.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:57.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:57.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:57.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:57.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:57.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:57.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:57.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:57.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:57.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:57.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:57.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:57.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:57.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:57.683 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:57.683 geninfo: WARNING: 
GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:57.683 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:57.683 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:57.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:57.684 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:57.684 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:57.684 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:57.684 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:57.684 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:57.684 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:57.684 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:57.684 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:57.684 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:57.684 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:06:05.803 05:03:24 -- unit/unittest.sh@206 -- # uname -m 00:06:05.803 05:03:24 -- unit/unittest.sh@206 -- # '[' x86_64 = aarch64 ']' 00:06:05.803 05:03:24 -- unit/unittest.sh@210 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:05.803 05:03:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:05.803 05:03:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:05.803 05:03:24 -- common/autotest_common.sh@10 -- # set +x 00:06:05.803 ************************************ 00:06:05.803 START TEST unittest_pci_event 00:06:05.803 ************************************ 00:06:05.803 05:03:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:05.803 00:06:05.803 00:06:05.803 CUnit - A unit testing framework for C - Version 2.1-3 00:06:05.803 http://cunit.sourceforge.net/ 00:06:05.803 00:06:05.803 00:06:05.803 Suite: pci_event 00:06:05.803 Test: test_pci_parse_event ...[2024-07-26 05:03:24.493867] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:06:05.803 [2024-07-26 05:03:24.494303] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:06:05.803 passed 00:06:05.803 00:06:05.803 Run Summary: Type Total Ran Passed Failed Inactive 00:06:05.803 suites 1 1 n/a 0 0 00:06:05.803 tests 1 1 1 0 0 00:06:05.803 asserts 15 15 15 0 n/a 00:06:05.803 00:06:05.803 Elapsed time = 0.001 seconds 00:06:05.803 00:06:05.803 real 0m0.035s 00:06:05.803 user 0m0.013s 00:06:05.803 sys 0m0.016s 00:06:05.803 05:03:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.803 05:03:24 -- common/autotest_common.sh@10 -- # set +x 00:06:05.803 ************************************ 00:06:05.803 END TEST unittest_pci_event 00:06:05.803 ************************************ 00:06:05.803 
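Note: each unit test below is launched through the run_test wrapper from autotest_common.sh, which prints the START TEST / END TEST banners and the real/user/sys timing around a single CUnit binary; the "Run Summary" table in between is printed by CUnit itself (columns are Total, Ran, Passed, Failed, Inactive), so the *ERROR* messages inside a suite are expected negative-path output as long as the Failed column stays 0. The invocation shape, taken from the entries in this log (wrapper internals paraphrased, not quoted):

    # run_test <test name> <command...>  -- records pass/fail under the given name
    run_test unittest_include \
        /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut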
05:03:24 -- unit/unittest.sh@211 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:05.803 05:03:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:05.803 05:03:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:05.803 05:03:24 -- common/autotest_common.sh@10 -- # set +x 00:06:05.803 ************************************ 00:06:05.803 START TEST unittest_include 00:06:05.803 ************************************ 00:06:05.803 05:03:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:05.803 00:06:05.803 00:06:05.803 CUnit - A unit testing framework for C - Version 2.1-3 00:06:05.803 http://cunit.sourceforge.net/ 00:06:05.803 00:06:05.803 00:06:05.803 Suite: histogram 00:06:05.803 Test: histogram_test ...passed 00:06:05.803 Test: histogram_merge ...passed 00:06:05.803 00:06:05.803 Run Summary: Type Total Ran Passed Failed Inactive 00:06:05.803 suites 1 1 n/a 0 0 00:06:05.803 tests 2 2 2 0 0 00:06:05.803 asserts 50 50 50 0 n/a 00:06:05.803 00:06:05.803 Elapsed time = 0.007 seconds 00:06:05.803 00:06:05.803 real 0m0.034s 00:06:05.803 user 0m0.023s 00:06:05.803 sys 0m0.011s 00:06:05.803 05:03:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.803 05:03:24 -- common/autotest_common.sh@10 -- # set +x 00:06:05.803 ************************************ 00:06:05.803 END TEST unittest_include 00:06:05.803 ************************************ 00:06:05.803 05:03:24 -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:06:05.803 05:03:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:05.803 05:03:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:05.803 05:03:24 -- common/autotest_common.sh@10 -- # set +x 00:06:05.803 ************************************ 00:06:05.803 START TEST unittest_bdev 00:06:05.803 ************************************ 00:06:05.803 05:03:24 -- common/autotest_common.sh@1104 -- # unittest_bdev 00:06:05.803 05:03:24 -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:06:05.803 00:06:05.803 00:06:05.803 CUnit - A unit testing framework for C - Version 2.1-3 00:06:05.803 http://cunit.sourceforge.net/ 00:06:05.803 00:06:05.803 00:06:05.803 Suite: bdev 00:06:05.803 Test: bytes_to_blocks_test ...passed 00:06:05.803 Test: num_blocks_test ...passed 00:06:05.803 Test: io_valid_test ...passed 00:06:05.804 Test: open_write_test ...[2024-07-26 05:03:24.697633] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:06:05.804 [2024-07-26 05:03:24.697988] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:06:05.804 [2024-07-26 05:03:24.698122] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:06:05.804 passed 00:06:05.804 Test: claim_test ...passed 00:06:05.804 Test: alias_add_del_test ...[2024-07-26 05:03:24.761460] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:06:05.804 [2024-07-26 05:03:24.761554] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4583:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:06:05.804 [2024-07-26 05:03:24.761604] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev 
name proper alias 0 already exists 00:06:05.804 passed 00:06:05.804 Test: get_device_stat_test ...passed 00:06:05.804 Test: bdev_io_types_test ...passed 00:06:05.804 Test: bdev_io_wait_test ...passed 00:06:05.804 Test: bdev_io_spans_split_test ...passed 00:06:05.804 Test: bdev_io_boundary_split_test ...passed 00:06:05.804 Test: bdev_io_max_size_and_segment_split_test ...[2024-07-26 05:03:24.880981] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:06:05.804 passed 00:06:06.062 Test: bdev_io_mix_split_test ...passed 00:06:06.062 Test: bdev_io_split_with_io_wait ...passed 00:06:06.062 Test: bdev_io_write_unit_split_test ...[2024-07-26 05:03:24.970579] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:06.062 [2024-07-26 05:03:24.970681] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:06.062 [2024-07-26 05:03:24.970738] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:06:06.062 [2024-07-26 05:03:24.970792] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:06:06.062 passed 00:06:06.062 Test: bdev_io_alignment_with_boundary ...passed 00:06:06.062 Test: bdev_io_alignment ...passed 00:06:06.062 Test: bdev_histograms ...passed 00:06:06.062 Test: bdev_write_zeroes ...passed 00:06:06.062 Test: bdev_compare_and_write ...passed 00:06:06.062 Test: bdev_compare ...passed 00:06:06.321 Test: bdev_compare_emulated ...passed 00:06:06.321 Test: bdev_zcopy_write ...passed 00:06:06.321 Test: bdev_zcopy_read ...passed 00:06:06.321 Test: bdev_open_while_hotremove ...passed 00:06:06.321 Test: bdev_close_while_hotremove ...passed 00:06:06.321 Test: bdev_open_ext_test ...passed 00:06:06.321 Test: bdev_open_ext_unregister ...[2024-07-26 05:03:25.216532] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:06.321 [2024-07-26 05:03:25.216700] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:06.321 passed 00:06:06.321 Test: bdev_set_io_timeout ...passed 00:06:06.321 Test: bdev_set_qd_sampling ...passed 00:06:06.321 Test: lba_range_overlap ...passed 00:06:06.321 Test: lock_lba_range_check_ranges ...passed 00:06:06.321 Test: lock_lba_range_with_io_outstanding ...passed 00:06:06.321 Test: lock_lba_range_overlapped ...passed 00:06:06.321 Test: bdev_quiesce ...[2024-07-26 05:03:25.322030] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9969:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 
00:06:06.321 passed 00:06:06.321 Test: bdev_io_abort ...passed 00:06:06.321 Test: bdev_unmap ...passed 00:06:06.321 Test: bdev_write_zeroes_split_test ...passed 00:06:06.321 Test: bdev_set_options_test ...passed 00:06:06.321 Test: bdev_get_memory_domains ...passed 00:06:06.321 Test: bdev_io_ext ...[2024-07-26 05:03:25.394942] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:06:06.321 passed 00:06:06.580 Test: bdev_io_ext_no_opts ...passed 00:06:06.580 Test: bdev_io_ext_invalid_opts ...passed 00:06:06.580 Test: bdev_io_ext_split ...passed 00:06:06.580 Test: bdev_io_ext_bounce_buffer ...passed 00:06:06.580 Test: bdev_register_uuid_alias ...[2024-07-26 05:03:25.497294] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name a0f1db7e-0c1c-4ce9-9774-f239d2885315 already exists 00:06:06.580 [2024-07-26 05:03:25.497381] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:a0f1db7e-0c1c-4ce9-9774-f239d2885315 alias for bdev bdev0 00:06:06.580 passed 00:06:06.580 Test: bdev_unregister_by_name ...[2024-07-26 05:03:25.514730] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7836:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:06:06.580 [2024-07-26 05:03:25.514787] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7844:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:06:06.580 passed 00:06:06.580 Test: for_each_bdev_test ...passed 00:06:06.580 Test: bdev_seek_test ...passed 00:06:06.580 Test: bdev_copy ...passed 00:06:06.580 Test: bdev_copy_split_test ...passed 00:06:06.580 Test: examine_locks ...passed 00:06:06.580 Test: claim_v2_rwo ...[2024-07-26 05:03:25.588129] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:06.580 passed 00:06:06.580 Test: claim_v2_rom ...[2024-07-26 05:03:25.588230] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8570:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:06.581 [2024-07-26 05:03:25.588269] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:06.581 [2024-07-26 05:03:25.588283] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:06.581 [2024-07-26 05:03:25.588303] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:06.581 [2024-07-26 05:03:25.588334] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8565:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:06:06.581 [2024-07-26 05:03:25.588459] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:06.581 [2024-07-26 05:03:25.588493] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:06.581 passed 00:06:06.581 Test: claim_v2_rwm ...[2024-07-26 05:03:25.588510] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: 
type read_many_write_none by module bdev_ut 00:06:06.581 [2024-07-26 05:03:25.588522] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:06.581 [2024-07-26 05:03:25.588558] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8608:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:06:06.581 [2024-07-26 05:03:25.588578] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:06.581 [2024-07-26 05:03:25.588672] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:06.581 [2024-07-26 05:03:25.588695] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:06.581 [2024-07-26 05:03:25.588718] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:06.581 passed 00:06:06.581 Test: claim_v2_existing_writer ...[2024-07-26 05:03:25.588731] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:06.581 [2024-07-26 05:03:25.588768] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:06.581 [2024-07-26 05:03:25.588781] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8658:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:06:06.581 [2024-07-26 05:03:25.588812] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:06.581 [2024-07-26 05:03:25.588924] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:06.581 [2024-07-26 05:03:25.588944] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:06.581 passed 00:06:06.581 Test: claim_v2_existing_v1 ...passed 00:06:06.581 Test: claim_v1_existing_v2 ...[2024-07-26 05:03:25.589109] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:06.581 [2024-07-26 05:03:25.589135] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:06.581 [2024-07-26 05:03:25.589148] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:06.581 passed 00:06:06.581 Test: examine_claimed ...[2024-07-26 05:03:25.589248] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:06.581 [2024-07-26 05:03:25.589283] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by 
module bdev_ut 00:06:06.581 [2024-07-26 05:03:25.589321] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:06.581 [2024-07-26 05:03:25.589649] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:06:06.581 passed 00:06:06.581 00:06:06.581 Run Summary: Type Total Ran Passed Failed Inactive 00:06:06.581 suites 1 1 n/a 0 0 00:06:06.581 tests 59 59 59 0 0 00:06:06.581 asserts 4599 4599 4599 0 n/a 00:06:06.581 00:06:06.581 Elapsed time = 0.932 seconds 00:06:06.581 05:03:25 -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:06:06.581 00:06:06.581 00:06:06.581 CUnit - A unit testing framework for C - Version 2.1-3 00:06:06.581 http://cunit.sourceforge.net/ 00:06:06.581 00:06:06.581 00:06:06.581 Suite: nvme 00:06:06.581 Test: test_create_ctrlr ...passed 00:06:06.581 Test: test_reset_ctrlr ...[2024-07-26 05:03:25.633544] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:06.581 passed 00:06:06.581 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:06:06.581 Test: test_failover_ctrlr ...passed 00:06:06.581 Test: test_race_between_failover_and_add_secondary_trid ...[2024-07-26 05:03:25.636299] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:06.581 [2024-07-26 05:03:25.636530] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:06.581 [2024-07-26 05:03:25.636751] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:06.581 passed 00:06:06.581 Test: test_pending_reset ...[2024-07-26 05:03:25.638466] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:06.581 [2024-07-26 05:03:25.638720] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:06.581 passed 00:06:06.581 Test: test_attach_ctrlr ...[2024-07-26 05:03:25.639918] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:06:06.581 passed 00:06:06.581 Test: test_aer_cb ...passed 00:06:06.581 Test: test_submit_nvme_cmd ...passed 00:06:06.581 Test: test_add_remove_trid ...passed 00:06:06.581 Test: test_abort ...[2024-07-26 05:03:25.643273] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7227:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:06:06.581 passed 00:06:06.581 Test: test_get_io_qpair ...passed 00:06:06.581 Test: test_bdev_unregister ...passed 00:06:06.581 Test: test_compare_ns ...passed 00:06:06.581 Test: test_init_ana_log_page ...passed 00:06:06.581 Test: test_get_memory_domains ...passed 00:06:06.581 Test: test_reconnect_qpair ...[2024-07-26 05:03:25.646161] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:06.581 passed 00:06:06.581 Test: test_create_bdev_ctrlr ...[2024-07-26 05:03:25.646706] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5279:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:06:06.581 passed 00:06:06.581 Test: test_add_multi_ns_to_bdev ...[2024-07-26 05:03:25.648015] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4492:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:06:06.581 passed 00:06:06.581 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:06:06.581 Test: test_admin_path ...passed 00:06:06.581 Test: test_reset_bdev_ctrlr ...passed 00:06:06.581 Test: test_find_io_path ...passed 00:06:06.581 Test: test_retry_io_if_ana_state_is_updating ...passed 00:06:06.581 Test: test_retry_io_for_io_path_error ...passed 00:06:06.581 Test: test_retry_io_count ...passed 00:06:06.581 Test: test_concurrent_read_ana_log_page ...passed 00:06:06.581 Test: test_retry_io_for_ana_error ...passed 00:06:06.581 Test: test_check_io_error_resiliency_params ...[2024-07-26 05:03:25.654995] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5932:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:06:06.581 [2024-07-26 05:03:25.655069] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5936:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:06.581 passed 00:06:06.581 Test: test_retry_io_if_ctrlr_is_resetting ...[2024-07-26 05:03:25.655089] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5945:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:06.581 [2024-07-26 05:03:25.655105] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5948:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:06:06.581 [2024-07-26 05:03:25.655119] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:06.581 [2024-07-26 05:03:25.655138] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:06.581 [2024-07-26 05:03:25.655151] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5940:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:06:06.581 [2024-07-26 05:03:25.655166] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5955:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:06:06.581 [2024-07-26 05:03:25.655182] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5952:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:06:06.581 passed 00:06:06.581 Test: test_reconnect_ctrlr ...[2024-07-26 05:03:25.655980] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:06.581 [2024-07-26 05:03:25.656135] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:06.581 [2024-07-26 05:03:25.656403] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:06.581 [2024-07-26 05:03:25.656514] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:06.581 [2024-07-26 05:03:25.656617] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:06.581 passed 00:06:06.582 Test: test_retry_failover_ctrlr ...[2024-07-26 05:03:25.656962] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:06.582 passed 00:06:06.582 Test: test_fail_path ...[2024-07-26 05:03:25.657611] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:06.582 [2024-07-26 05:03:25.657764] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:06.582 [2024-07-26 05:03:25.657913] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:06.582 [2024-07-26 05:03:25.658050] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:06.582 [2024-07-26 05:03:25.658151] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:06.582 passed 00:06:06.582 Test: test_nvme_ns_cmp ...passed 00:06:06.582 Test: test_ana_transition ...passed 00:06:06.582 Test: test_set_preferred_path ...passed 00:06:06.582 Test: test_find_next_io_path ...passed 00:06:06.582 Test: test_find_io_path_min_qd ...passed 00:06:06.582 Test: test_disable_auto_failback ...[2024-07-26 05:03:25.659893] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:06.582 passed 00:06:06.582 Test: test_set_multipath_policy ...passed 00:06:06.582 Test: test_uuid_generation ...passed 00:06:06.582 Test: test_retry_io_to_same_path ...passed 00:06:06.582 Test: test_race_between_reset_and_disconnected ...passed 00:06:06.582 Test: test_ctrlr_op_rpc ...passed 00:06:06.582 Test: test_bdev_ctrlr_op_rpc ...passed 00:06:06.582 Test: test_disable_enable_ctrlr ...[2024-07-26 05:03:25.663676] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:06.582 [2024-07-26 05:03:25.663850] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:06.582 passed 00:06:06.582 Test: test_delete_ctrlr_done ...passed 00:06:06.582 Test: test_ns_remove_during_reset ...passed 00:06:06.582 00:06:06.582 Run Summary: Type Total Ran Passed Failed Inactive 00:06:06.582 suites 1 1 n/a 0 0 00:06:06.582 tests 48 48 48 0 0 00:06:06.582 asserts 3553 3553 3553 0 n/a 00:06:06.582 00:06:06.582 Elapsed time = 0.033 seconds 00:06:06.582 05:03:25 -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:06:06.840 Test Options 00:06:06.840 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:06:06.840 00:06:06.841 00:06:06.841 CUnit - A unit testing framework for C - Version 2.1-3 00:06:06.841 http://cunit.sourceforge.net/ 00:06:06.841 00:06:06.841 00:06:06.841 Suite: raid 00:06:06.841 Test: test_create_raid ...passed 00:06:06.841 Test: test_create_raid_superblock ...passed 00:06:06.841 Test: test_delete_raid ...passed 00:06:06.841 Test: test_create_raid_invalid_args ...[2024-07-26 05:03:25.709483] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:06:06.841 [2024-07-26 05:03:25.709985] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:06:06.841 [2024-07-26 05:03:25.710735] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:06:06.841 [2024-07-26 05:03:25.711034] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:06.841 [2024-07-26 05:03:25.711916] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:06.841 passed 00:06:06.841 Test: test_delete_raid_invalid_args ...passed 00:06:06.841 Test: test_io_channel ...passed 00:06:06.841 Test: test_reset_io ...passed 00:06:06.841 Test: test_write_io ...passed 00:06:06.841 Test: test_read_io ...passed 00:06:07.409 Test: test_unmap_io ...passed 00:06:07.409 Test: test_io_failure ...[2024-07-26 05:03:26.238927] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:06:07.409 passed 00:06:07.409 Test: test_multi_raid_no_io ...passed 00:06:07.409 Test: test_multi_raid_with_io ...passed 00:06:07.409 Test: test_io_type_supported ...passed 00:06:07.409 Test: test_raid_json_dump_info ...passed 00:06:07.409 Test: test_context_size ...passed 00:06:07.409 Test: test_raid_level_conversions ...passed 00:06:07.409 Test: test_raid_process ...passed 00:06:07.409 Test: test_raid_io_split ...passed 00:06:07.409 00:06:07.409 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.409 suites 1 1 n/a 0 0 00:06:07.409 tests 19 19 19 0 0 00:06:07.409 asserts 177879 177879 177879 0 n/a 00:06:07.409 00:06:07.409 Elapsed time = 0.540 seconds 00:06:07.409 05:03:26 -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:06:07.409 00:06:07.409 00:06:07.409 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.409 http://cunit.sourceforge.net/ 00:06:07.409 00:06:07.409 00:06:07.409 Suite: raid_sb 00:06:07.409 Test: test_raid_bdev_write_superblock ...passed 00:06:07.409 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:07.409 Test: 
test_raid_bdev_parse_superblock ...[2024-07-26 05:03:26.288357] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 120:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:07.409 passed 00:06:07.409 00:06:07.409 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.409 suites 1 1 n/a 0 0 00:06:07.409 tests 3 3 3 0 0 00:06:07.409 asserts 32 32 32 0 n/a 00:06:07.409 00:06:07.409 Elapsed time = 0.002 seconds 00:06:07.409 05:03:26 -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:06:07.409 00:06:07.409 00:06:07.409 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.409 http://cunit.sourceforge.net/ 00:06:07.409 00:06:07.409 00:06:07.409 Suite: concat 00:06:07.409 Test: test_concat_start ...passed 00:06:07.409 Test: test_concat_rw ...passed 00:06:07.409 Test: test_concat_null_payload ...passed 00:06:07.409 00:06:07.409 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.409 suites 1 1 n/a 0 0 00:06:07.409 tests 3 3 3 0 0 00:06:07.409 asserts 8097 8097 8097 0 n/a 00:06:07.409 00:06:07.409 Elapsed time = 0.009 seconds 00:06:07.409 05:03:26 -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:06:07.409 00:06:07.409 00:06:07.409 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.409 http://cunit.sourceforge.net/ 00:06:07.409 00:06:07.409 00:06:07.409 Suite: raid1 00:06:07.409 Test: test_raid1_start ...passed 00:06:07.409 Test: test_raid1_read_balancing ...passed 00:06:07.409 00:06:07.409 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.409 suites 1 1 n/a 0 0 00:06:07.409 tests 2 2 2 0 0 00:06:07.409 asserts 2856 2856 2856 0 n/a 00:06:07.409 00:06:07.409 Elapsed time = 0.005 seconds 00:06:07.409 05:03:26 -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:06:07.409 00:06:07.409 00:06:07.409 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.409 http://cunit.sourceforge.net/ 00:06:07.409 00:06:07.409 00:06:07.409 Suite: zone 00:06:07.409 Test: test_zone_get_operation ...passed 00:06:07.409 Test: test_bdev_zone_get_info ...passed 00:06:07.409 Test: test_bdev_zone_management ...passed 00:06:07.409 Test: test_bdev_zone_append ...passed 00:06:07.409 Test: test_bdev_zone_append_with_md ...passed 00:06:07.409 Test: test_bdev_zone_appendv ...passed 00:06:07.409 Test: test_bdev_zone_appendv_with_md ...passed 00:06:07.409 Test: test_bdev_io_get_append_location ...passed 00:06:07.409 00:06:07.409 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.409 suites 1 1 n/a 0 0 00:06:07.409 tests 8 8 8 0 0 00:06:07.409 asserts 94 94 94 0 n/a 00:06:07.409 00:06:07.409 Elapsed time = 0.001 seconds 00:06:07.409 05:03:26 -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:06:07.409 00:06:07.409 00:06:07.409 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.409 http://cunit.sourceforge.net/ 00:06:07.409 00:06:07.409 00:06:07.409 Suite: gpt_parse 00:06:07.409 Test: test_parse_mbr_and_primary ...[2024-07-26 05:03:26.426996] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:07.409 [2024-07-26 05:03:26.427175] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:07.409 [2024-07-26 05:03:26.427279] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:07.409 [2024-07-26 05:03:26.427303] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:07.409 [2024-07-26 05:03:26.427355] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:07.409 [2024-07-26 05:03:26.427393] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:07.409 passed 00:06:07.409 Test: test_parse_secondary ...[2024-07-26 05:03:26.427961] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:07.409 [2024-07-26 05:03:26.427983] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:07.409 [2024-07-26 05:03:26.428037] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:07.409 [2024-07-26 05:03:26.428076] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:07.409 passed 00:06:07.409 Test: test_check_mbr ...[2024-07-26 05:03:26.428668] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:07.409 [2024-07-26 05:03:26.428695] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:07.409 passed 00:06:07.409 Test: test_read_header ...passed 00:06:07.409 Test: test_read_partitions ...[2024-07-26 05:03:26.428773] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:06:07.409 [2024-07-26 05:03:26.428815] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:06:07.409 [2024-07-26 05:03:26.428858] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:06:07.409 [2024-07-26 05:03:26.428879] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:06:07.409 [2024-07-26 05:03:26.428906] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:06:07.409 [2024-07-26 05:03:26.428925] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:06:07.409 [2024-07-26 05:03:26.428991] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:06:07.409 [2024-07-26 05:03:26.429028] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:06:07.409 [2024-07-26 05:03:26.429052] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:06:07.409 [2024-07-26 05:03:26.429072] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:06:07.409 [2024-07-26 05:03:26.429352] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: 
GPT partition entry array crc32 did not match 00:06:07.409 passed 00:06:07.409 00:06:07.409 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.409 suites 1 1 n/a 0 0 00:06:07.409 tests 5 5 5 0 0 00:06:07.410 asserts 33 33 33 0 n/a 00:06:07.410 00:06:07.410 Elapsed time = 0.003 seconds 00:06:07.410 05:03:26 -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:06:07.410 00:06:07.410 00:06:07.410 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.410 http://cunit.sourceforge.net/ 00:06:07.410 00:06:07.410 00:06:07.410 Suite: bdev_part 00:06:07.410 Test: part_test ...[2024-07-26 05:03:26.466541] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:06:07.410 passed 00:06:07.410 Test: part_free_test ...passed 00:06:07.410 Test: part_get_io_channel_test ...passed 00:06:07.410 Test: part_construct_ext ...passed 00:06:07.410 00:06:07.410 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.410 suites 1 1 n/a 0 0 00:06:07.410 tests 4 4 4 0 0 00:06:07.410 asserts 48 48 48 0 n/a 00:06:07.410 00:06:07.410 Elapsed time = 0.041 seconds 00:06:07.669 05:03:26 -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:06:07.669 00:06:07.669 00:06:07.669 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.669 http://cunit.sourceforge.net/ 00:06:07.669 00:06:07.669 00:06:07.669 Suite: scsi_nvme_suite 00:06:07.669 Test: scsi_nvme_translate_test ...passed 00:06:07.669 00:06:07.669 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.669 suites 1 1 n/a 0 0 00:06:07.669 tests 1 1 1 0 0 00:06:07.669 asserts 104 104 104 0 n/a 00:06:07.669 00:06:07.669 Elapsed time = 0.000 seconds 00:06:07.669 05:03:26 -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:06:07.669 00:06:07.669 00:06:07.669 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.669 http://cunit.sourceforge.net/ 00:06:07.669 00:06:07.669 00:06:07.669 Suite: lvol 00:06:07.669 Test: ut_lvs_init ...passed 00:06:07.669 Test: ut_lvol_init ...[2024-07-26 05:03:26.569403] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:06:07.669 [2024-07-26 05:03:26.569719] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:06:07.669 passed 00:06:07.669 Test: ut_lvol_snapshot ...passed 00:06:07.669 Test: ut_lvol_clone ...passed 00:06:07.669 Test: ut_lvs_destroy ...passed 00:06:07.669 Test: ut_lvs_unload ...passed 00:06:07.669 Test: ut_lvol_resize ...passed 00:06:07.669 Test: ut_lvol_set_read_only ...[2024-07-26 05:03:26.571208] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:06:07.669 passed 00:06:07.669 Test: ut_lvol_hotremove ...passed 00:06:07.669 Test: ut_vbdev_lvol_get_io_channel ...passed 00:06:07.669 Test: ut_vbdev_lvol_io_type_supported ...passed 00:06:07.669 Test: ut_lvol_read_write ...passed 00:06:07.669 Test: ut_vbdev_lvol_submit_request ...passed 00:06:07.669 Test: ut_lvol_examine_config ...passed 00:06:07.669 Test: ut_lvol_examine_disk ...[2024-07-26 05:03:26.571790] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:06:07.669 passed 00:06:07.669 Test: ut_lvol_rename ...passed 00:06:07.669 Test: ut_bdev_finish 
...passed[2024-07-26 05:03:26.572696] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:06:07.669 [2024-07-26 05:03:26.572751] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:06:07.669 00:06:07.669 Test: ut_lvs_rename ...passed 00:06:07.669 Test: ut_lvol_seek ...passed 00:06:07.669 Test: ut_esnap_dev_create ...[2024-07-26 05:03:26.573360] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:06:07.669 [2024-07-26 05:03:26.573403] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:06:07.669 [2024-07-26 05:03:26.573464] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:06:07.669 passed 00:06:07.669 Test: ut_lvol_esnap_clone_bad_args ...[2024-07-26 05:03:26.573512] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:06:07.669 [2024-07-26 05:03:26.573602] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:06:07.669 [2024-07-26 05:03:26.573630] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:06:07.669 passed 00:06:07.669 00:06:07.669 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.669 suites 1 1 n/a 0 0 00:06:07.669 tests 21 21 21 0 0 00:06:07.669 asserts 712 712 712 0 n/a 00:06:07.669 00:06:07.669 Elapsed time = 0.005 seconds 00:06:07.669 05:03:26 -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:06:07.669 00:06:07.669 00:06:07.669 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.669 http://cunit.sourceforge.net/ 00:06:07.669 00:06:07.669 00:06:07.669 Suite: zone_block 00:06:07.669 Test: test_zone_block_create ...passed 00:06:07.669 Test: test_zone_block_create_invalid ...[2024-07-26 05:03:26.621958] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:06:07.669 [2024-07-26 05:03:26.622214] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-26 05:03:26.622333] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:06:07.669 [2024-07-26 05:03:26.622389] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-26 05:03:26.622546] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:06:07.669 [2024-07-26 05:03:26.622573] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:06:07.670 Test: test_get_zone_info ...[2024-07-26 
05:03:26.622640] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:06:07.670 [2024-07-26 05:03:26.622662] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-26 05:03:26.623360] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.670 [2024-07-26 05:03:26.623419] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.670 [2024-07-26 05:03:26.623471] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.670 passed 00:06:07.670 Test: test_supported_io_types ...passed 00:06:07.670 Test: test_reset_zone ...[2024-07-26 05:03:26.624150] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.670 [2024-07-26 05:03:26.624198] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.670 passed 00:06:07.670 Test: test_open_zone ...[2024-07-26 05:03:26.624531] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.670 [2024-07-26 05:03:26.625144] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.670 [2024-07-26 05:03:26.625222] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.670 passed 00:06:07.670 Test: test_zone_write ...[2024-07-26 05:03:26.625688] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:07.670 [2024-07-26 05:03:26.625721] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.670 [2024-07-26 05:03:26.625791] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:07.670 [2024-07-26 05:03:26.625809] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.670 [2024-07-26 05:03:26.631098] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:06:07.670 [2024-07-26 05:03:26.631157] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.670 [2024-07-26 05:03:26.631219] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:06:07.670 [2024-07-26 05:03:26.631247] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:07.670 [2024-07-26 05:03:26.636146] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:07.670 [2024-07-26 05:03:26.636202] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.670 passed 00:06:07.670 Test: test_zone_read ...[2024-07-26 05:03:26.636596] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:06:07.670 [2024-07-26 05:03:26.636631] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.670 [2024-07-26 05:03:26.636686] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:06:07.670 [2024-07-26 05:03:26.636709] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.670 [2024-07-26 05:03:26.637098] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:06:07.670 [2024-07-26 05:03:26.637136] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.670 passed 00:06:07.670 Test: test_close_zone ...[2024-07-26 05:03:26.637368] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.670 [2024-07-26 05:03:26.637475] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.670 [2024-07-26 05:03:26.637650] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.670 [2024-07-26 05:03:26.637686] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.670 passed 00:06:07.670 Test: test_finish_zone ...[2024-07-26 05:03:26.638140] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.670 [2024-07-26 05:03:26.638192] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.670 passed 00:06:07.670 Test: test_append_zone ...[2024-07-26 05:03:26.638489] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:07.670 [2024-07-26 05:03:26.638524] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.670 [2024-07-26 05:03:26.638568] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:07.670 [2024-07-26 05:03:26.638586] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:07.670 [2024-07-26 05:03:26.648244] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:07.670 [2024-07-26 05:03:26.648283] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.670 passed 00:06:07.670 00:06:07.670 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.670 suites 1 1 n/a 0 0 00:06:07.670 tests 11 11 11 0 0 00:06:07.670 asserts 3437 3437 3437 0 n/a 00:06:07.670 00:06:07.670 Elapsed time = 0.027 seconds 00:06:07.670 05:03:26 -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:06:07.670 00:06:07.670 00:06:07.670 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.670 http://cunit.sourceforge.net/ 00:06:07.670 00:06:07.670 00:06:07.670 Suite: bdev 00:06:07.670 Test: basic ...[2024-07-26 05:03:26.712278] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x5be43729cec1): Operation not permitted (rc=-1) 00:06:07.670 [2024-07-26 05:03:26.712592] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x5130000003c0 (0x5be43729ce80): Operation not permitted (rc=-1) 00:06:07.670 [2024-07-26 05:03:26.712649] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x5be43729cec1): Operation not permitted (rc=-1) 00:06:07.670 passed 00:06:07.670 Test: unregister_and_close ...passed 00:06:07.929 Test: unregister_and_close_different_threads ...passed 00:06:07.929 Test: basic_qos ...passed 00:06:07.929 Test: put_channel_during_reset ...passed 00:06:07.929 Test: aborted_reset ...passed 00:06:07.929 Test: aborted_reset_no_outstanding_io ...passed 00:06:07.929 Test: io_during_reset ...passed 00:06:07.929 Test: reset_completions ...passed 00:06:07.929 Test: io_during_qos_queue ...passed 00:06:08.188 Test: io_during_qos_reset ...passed 00:06:08.188 Test: enomem ...passed 00:06:08.188 Test: enomem_multi_bdev ...passed 00:06:08.188 Test: enomem_multi_bdev_unregister ...passed 00:06:08.188 Test: enomem_multi_io_target ...passed 00:06:08.188 Test: qos_dynamic_enable ...passed 00:06:08.188 Test: bdev_histograms_mt ...passed 00:06:08.188 Test: bdev_set_io_timeout_mt ...[2024-07-26 05:03:27.216664] thread.c: 465:spdk_thread_lib_fini: *ERROR*: io_device 0x5130000003c0 not unregistered 00:06:08.188 passed 00:06:08.188 Test: lock_lba_range_then_submit_io ...[2024-07-26 05:03:27.223359] thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x5be43729ce40 already registered (old:0x5130000003c0 new:0x513000000c80) 00:06:08.188 passed 00:06:08.188 Test: unregister_during_reset ...passed 00:06:08.188 Test: event_notify_and_close ...passed 00:06:08.446 Test: unregister_and_qos_poller ...passed 00:06:08.446 Suite: bdev_wrong_thread 00:06:08.446 Test: spdk_bdev_register_wt ...[2024-07-26 05:03:27.323522] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8364:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x518000001480 (0x518000001480) 00:06:08.446 passed 00:06:08.446 Test: spdk_bdev_examine_wt ...[2024-07-26 05:03:27.323906] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 793:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x518000001480 (0x518000001480) 00:06:08.446 passed 00:06:08.446 00:06:08.446 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.446 suites 2 2 
n/a 0 0 00:06:08.446 tests 24 24 24 0 0 00:06:08.446 asserts 621 621 621 0 n/a 00:06:08.446 00:06:08.446 Elapsed time = 0.624 seconds 00:06:08.446 00:06:08.446 real 0m2.718s 00:06:08.446 user 0m1.233s 00:06:08.446 sys 0m1.489s 00:06:08.446 05:03:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.446 05:03:27 -- common/autotest_common.sh@10 -- # set +x 00:06:08.446 ************************************ 00:06:08.446 END TEST unittest_bdev 00:06:08.446 ************************************ 00:06:08.446 05:03:27 -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:08.446 05:03:27 -- unit/unittest.sh@218 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:08.446 05:03:27 -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:08.446 05:03:27 -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:08.446 05:03:27 -- unit/unittest.sh@228 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:08.446 05:03:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:08.446 05:03:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.446 05:03:27 -- common/autotest_common.sh@10 -- # set +x 00:06:08.446 ************************************ 00:06:08.446 START TEST unittest_bdev_raid5f 00:06:08.446 ************************************ 00:06:08.446 05:03:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:08.446 00:06:08.446 00:06:08.446 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.446 http://cunit.sourceforge.net/ 00:06:08.446 00:06:08.446 00:06:08.446 Suite: raid5f 00:06:08.446 Test: test_raid5f_start ...passed 00:06:09.012 Test: test_raid5f_submit_read_request ...passed 00:06:09.012 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:06:12.350 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:06:27.237 Test: test_raid5f_chunk_write_error ...passed 00:06:35.373 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:06:36.752 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:07:03.306 Test: test_raid5f_submit_read_request_degraded ...passed 00:07:03.306 00:07:03.306 Run Summary: Type Total Ran Passed Failed Inactive 00:07:03.306 suites 1 1 n/a 0 0 00:07:03.306 tests 8 8 8 0 0 00:07:03.306 asserts 351864 351864 351864 0 n/a 00:07:03.306 00:07:03.306 Elapsed time = 52.828 seconds 00:07:03.306 00:07:03.306 real 0m52.921s 00:07:03.306 user 0m50.557s 00:07:03.306 sys 0m2.349s 00:07:03.306 05:04:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.306 ************************************ 00:07:03.306 END TEST unittest_bdev_raid5f 00:07:03.306 ************************************ 00:07:03.306 05:04:20 -- common/autotest_common.sh@10 -- # set +x 00:07:03.306 05:04:20 -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:07:03.306 05:04:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:03.306 05:04:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:03.306 05:04:20 -- common/autotest_common.sh@10 -- # set +x 00:07:03.306 ************************************ 00:07:03.306 START TEST unittest_blob_blobfs 00:07:03.306 ************************************ 00:07:03.306 
05:04:20 -- common/autotest_common.sh@1104 -- # unittest_blob 00:07:03.306 05:04:20 -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:07:03.306 05:04:20 -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:07:03.306 00:07:03.306 00:07:03.306 CUnit - A unit testing framework for C - Version 2.1-3 00:07:03.306 http://cunit.sourceforge.net/ 00:07:03.306 00:07:03.306 00:07:03.306 Suite: blob_nocopy_noextent 00:07:03.306 Test: blob_init ...[2024-07-26 05:04:20.414898] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:03.306 passed 00:07:03.306 Test: blob_thin_provision ...passed 00:07:03.306 Test: blob_read_only ...passed 00:07:03.306 Test: bs_load ...[2024-07-26 05:04:20.477623] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:03.306 passed 00:07:03.306 Test: bs_load_custom_cluster_size ...passed 00:07:03.306 Test: bs_load_after_failed_grow ...passed 00:07:03.306 Test: bs_cluster_sz ...[2024-07-26 05:04:20.496550] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:03.306 [2024-07-26 05:04:20.496926] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:07:03.306 [2024-07-26 05:04:20.496996] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:03.306 passed 00:07:03.306 Test: bs_resize_md ...passed 00:07:03.306 Test: bs_destroy ...passed 00:07:03.306 Test: bs_type ...passed 00:07:03.306 Test: bs_super_block ...passed 00:07:03.306 Test: bs_test_recover_cluster_count ...passed 00:07:03.306 Test: bs_grow_live ...passed 00:07:03.306 Test: bs_grow_live_no_space ...passed 00:07:03.306 Test: bs_test_grow ...passed 00:07:03.306 Test: blob_serialize_test ...passed 00:07:03.306 Test: super_block_crc ...passed 00:07:03.306 Test: blob_thin_prov_write_count_io ...passed 00:07:03.306 Test: bs_load_iter_test ...passed 00:07:03.306 Test: blob_relations ...[2024-07-26 05:04:20.614377] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:03.306 [2024-07-26 05:04:20.614502] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.306 passed 00:07:03.306 Test: blob_relations2 ...[2024-07-26 05:04:20.615557] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:03.306 [2024-07-26 05:04:20.615612] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.306 [2024-07-26 05:04:20.626904] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:03.306 [2024-07-26 05:04:20.626973] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.306 [2024-07-26 05:04:20.627066] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:03.306 [2024-07-26 
05:04:20.627085] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.306 [2024-07-26 05:04:20.628700] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:03.306 [2024-07-26 05:04:20.628779] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.306 [2024-07-26 05:04:20.629293] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:03.306 [2024-07-26 05:04:20.629342] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.306 passed 00:07:03.306 Test: blob_relations3 ...passed 00:07:03.306 Test: blobstore_clean_power_failure ...passed 00:07:03.306 Test: blob_delete_snapshot_power_failure ...[2024-07-26 05:04:20.725035] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:03.306 [2024-07-26 05:04:20.733241] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:03.306 [2024-07-26 05:04:20.733338] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:03.306 [2024-07-26 05:04:20.733407] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.306 [2024-07-26 05:04:20.741469] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:03.306 [2024-07-26 05:04:20.741550] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:03.306 [2024-07-26 05:04:20.741592] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:03.306 [2024-07-26 05:04:20.741617] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.306 [2024-07-26 05:04:20.749918] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:03.306 [2024-07-26 05:04:20.750072] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.306 [2024-07-26 05:04:20.758168] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:03.306 [2024-07-26 05:04:20.758286] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.306 [2024-07-26 05:04:20.766614] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:03.306 [2024-07-26 05:04:20.766728] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.306 passed 00:07:03.306 Test: blob_create_snapshot_power_failure ...[2024-07-26 05:04:20.790400] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:03.306 [2024-07-26 05:04:20.805595] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:03.306 [2024-07-26 05:04:20.814089] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:03.306 passed 00:07:03.306 Test: blob_io_unit ...passed 00:07:03.306 Test: blob_io_unit_compatibility ...passed 00:07:03.306 Test: blob_ext_md_pages ...passed 00:07:03.306 Test: blob_esnap_io_4096_4096 ...passed 00:07:03.306 Test: blob_esnap_io_512_512 ...passed 00:07:03.306 Test: blob_esnap_io_4096_512 ...passed 00:07:03.306 Test: blob_esnap_io_512_4096 ...passed 00:07:03.306 Suite: blob_bs_nocopy_noextent 00:07:03.306 Test: blob_open ...passed 00:07:03.307 Test: blob_create ...[2024-07-26 05:04:20.974248] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:03.307 passed 00:07:03.307 Test: blob_create_loop ...passed 00:07:03.307 Test: blob_create_fail ...[2024-07-26 05:04:21.043886] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:03.307 passed 00:07:03.307 Test: blob_create_internal ...passed 00:07:03.307 Test: blob_create_zero_extent ...passed 00:07:03.307 Test: blob_snapshot ...passed 00:07:03.307 Test: blob_clone ...passed 00:07:03.307 Test: blob_inflate ...[2024-07-26 05:04:21.154352] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:03.307 passed 00:07:03.307 Test: blob_delete ...passed 00:07:03.307 Test: blob_resize_test ...[2024-07-26 05:04:21.193379] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:03.307 passed 00:07:03.307 Test: channel_ops ...passed 00:07:03.307 Test: blob_super ...passed 00:07:03.307 Test: blob_rw_verify_iov ...passed 00:07:03.307 Test: blob_unmap ...passed 00:07:03.307 Test: blob_iter ...passed 00:07:03.307 Test: blob_parse_md ...passed 00:07:03.307 Test: bs_load_pending_removal ...passed 00:07:03.307 Test: bs_unload ...[2024-07-26 05:04:21.349936] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:03.307 passed 00:07:03.307 Test: bs_usable_clusters ...passed 00:07:03.307 Test: blob_crc ...[2024-07-26 05:04:21.389582] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:03.307 [2024-07-26 05:04:21.389759] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:03.307 passed 00:07:03.307 Test: blob_flags ...passed 00:07:03.307 Test: bs_version ...passed 00:07:03.307 Test: blob_set_xattrs_test ...[2024-07-26 05:04:21.450700] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:03.307 [2024-07-26 05:04:21.450814] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:03.307 passed 00:07:03.307 Test: blob_thin_prov_alloc ...passed 00:07:03.307 Test: blob_insert_cluster_msg_test ...passed 00:07:03.307 Test: blob_thin_prov_rw ...passed 
00:07:03.307 Test: blob_thin_prov_rle ...passed 00:07:03.307 Test: blob_thin_prov_rw_iov ...passed 00:07:03.307 Test: blob_snapshot_rw ...passed 00:07:03.307 Test: blob_snapshot_rw_iov ...passed 00:07:03.307 Test: blob_inflate_rw ...passed 00:07:03.307 Test: blob_snapshot_freeze_io ...passed 00:07:03.307 Test: blob_operation_split_rw ...passed 00:07:03.307 Test: blob_operation_split_rw_iov ...passed 00:07:03.307 Test: blob_simultaneous_operations ...[2024-07-26 05:04:22.197471] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:03.307 [2024-07-26 05:04:22.197568] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.307 [2024-07-26 05:04:22.198725] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:03.307 [2024-07-26 05:04:22.198776] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.307 [2024-07-26 05:04:22.208665] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:03.307 [2024-07-26 05:04:22.208723] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.307 [2024-07-26 05:04:22.208858] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:03.307 [2024-07-26 05:04:22.208880] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.307 passed 00:07:03.307 Test: blob_persist_test ...passed 00:07:03.307 Test: blob_decouple_snapshot ...passed 00:07:03.307 Test: blob_seek_io_unit ...passed 00:07:03.307 Test: blob_nested_freezes ...passed 00:07:03.307 Suite: blob_blob_nocopy_noextent 00:07:03.307 Test: blob_write ...passed 00:07:03.307 Test: blob_read ...passed 00:07:03.307 Test: blob_rw_verify ...passed 00:07:03.307 Test: blob_rw_verify_iov_nomem ...passed 00:07:03.566 Test: blob_rw_iov_read_only ...passed 00:07:03.566 Test: blob_xattr ...passed 00:07:03.566 Test: blob_dirty_shutdown ...passed 00:07:03.566 Test: blob_is_degraded ...passed 00:07:03.566 Suite: blob_esnap_bs_nocopy_noextent 00:07:03.566 Test: blob_esnap_create ...passed 00:07:03.566 Test: blob_esnap_thread_add_remove ...passed 00:07:03.566 Test: blob_esnap_clone_snapshot ...passed 00:07:03.566 Test: blob_esnap_clone_inflate ...passed 00:07:03.566 Test: blob_esnap_clone_decouple ...passed 00:07:03.566 Test: blob_esnap_clone_reload ...passed 00:07:03.566 Test: blob_esnap_hotplug ...passed 00:07:03.566 Suite: blob_nocopy_extent 00:07:03.566 Test: blob_init ...[2024-07-26 05:04:22.631952] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:03.566 passed 00:07:03.566 Test: blob_thin_provision ...passed 00:07:03.566 Test: blob_read_only ...passed 00:07:03.566 Test: bs_load ...[2024-07-26 05:04:22.661535] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:03.566 passed 00:07:03.566 Test: bs_load_custom_cluster_size ...passed 00:07:03.826 Test: bs_load_after_failed_grow ...passed 00:07:03.826 Test: bs_cluster_sz ...[2024-07-26 05:04:22.681620] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:03.826 [2024-07-26 05:04:22.681953] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:07:03.826 [2024-07-26 05:04:22.682069] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:03.826 passed 00:07:03.826 Test: bs_resize_md ...passed 00:07:03.826 Test: bs_destroy ...passed 00:07:03.826 Test: bs_type ...passed 00:07:03.826 Test: bs_super_block ...passed 00:07:03.826 Test: bs_test_recover_cluster_count ...passed 00:07:03.826 Test: bs_grow_live ...passed 00:07:03.826 Test: bs_grow_live_no_space ...passed 00:07:03.826 Test: bs_test_grow ...passed 00:07:03.826 Test: blob_serialize_test ...passed 00:07:03.826 Test: super_block_crc ...passed 00:07:03.826 Test: blob_thin_prov_write_count_io ...passed 00:07:03.826 Test: bs_load_iter_test ...passed 00:07:03.826 Test: blob_relations ...[2024-07-26 05:04:22.788110] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:03.826 [2024-07-26 05:04:22.788235] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.826 [2024-07-26 05:04:22.789317] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:03.826 [2024-07-26 05:04:22.789426] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.826 passed 00:07:03.826 Test: blob_relations2 ...[2024-07-26 05:04:22.799105] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:03.826 [2024-07-26 05:04:22.799180] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.826 [2024-07-26 05:04:22.799223] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:03.826 [2024-07-26 05:04:22.799237] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.826 [2024-07-26 05:04:22.800617] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:03.826 [2024-07-26 05:04:22.800692] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.826 [2024-07-26 05:04:22.801151] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:03.826 [2024-07-26 05:04:22.801191] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.826 passed 00:07:03.826 Test: blob_relations3 ...passed 00:07:03.826 Test: blobstore_clean_power_failure ...passed 00:07:03.826 Test: blob_delete_snapshot_power_failure ...[2024-07-26 05:04:22.899282] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:03.826 [2024-07-26 05:04:22.907345] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:03.826 [2024-07-26 05:04:22.915642] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:03.826 [2024-07-26 05:04:22.915721] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:03.826 [2024-07-26 05:04:22.915765] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.826 [2024-07-26 05:04:22.924432] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:03.826 [2024-07-26 05:04:22.924542] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:03.826 [2024-07-26 05:04:22.924567] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:03.826 [2024-07-26 05:04:22.924587] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.826 [2024-07-26 05:04:22.933247] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:03.826 [2024-07-26 05:04:22.933384] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:03.826 [2024-07-26 05:04:22.933422] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:03.826 [2024-07-26 05:04:22.933456] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:04.085 [2024-07-26 05:04:22.942421] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:04.085 [2024-07-26 05:04:22.942534] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:04.085 [2024-07-26 05:04:22.950770] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:04.085 [2024-07-26 05:04:22.950899] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:04.085 [2024-07-26 05:04:22.959460] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:04.085 [2024-07-26 05:04:22.959564] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:04.085 passed 00:07:04.085 Test: blob_create_snapshot_power_failure ...[2024-07-26 05:04:22.983548] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:04.085 [2024-07-26 05:04:22.991374] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:04.085 [2024-07-26 05:04:23.006643] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:04.085 [2024-07-26 05:04:23.014899] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 
00:07:04.085 passed 00:07:04.086 Test: blob_io_unit ...passed 00:07:04.086 Test: blob_io_unit_compatibility ...passed 00:07:04.086 Test: blob_ext_md_pages ...passed 00:07:04.086 Test: blob_esnap_io_4096_4096 ...passed 00:07:04.086 Test: blob_esnap_io_512_512 ...passed 00:07:04.086 Test: blob_esnap_io_4096_512 ...passed 00:07:04.086 Test: blob_esnap_io_512_4096 ...passed 00:07:04.086 Suite: blob_bs_nocopy_extent 00:07:04.086 Test: blob_open ...passed 00:07:04.086 Test: blob_create ...[2024-07-26 05:04:23.194576] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:04.344 passed 00:07:04.344 Test: blob_create_loop ...passed 00:07:04.344 Test: blob_create_fail ...[2024-07-26 05:04:23.271427] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:04.344 passed 00:07:04.344 Test: blob_create_internal ...passed 00:07:04.345 Test: blob_create_zero_extent ...passed 00:07:04.345 Test: blob_snapshot ...passed 00:07:04.345 Test: blob_clone ...passed 00:07:04.345 Test: blob_inflate ...[2024-07-26 05:04:23.380513] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:04.345 passed 00:07:04.345 Test: blob_delete ...passed 00:07:04.345 Test: blob_resize_test ...[2024-07-26 05:04:23.418935] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:04.345 passed 00:07:04.345 Test: channel_ops ...passed 00:07:04.602 Test: blob_super ...passed 00:07:04.602 Test: blob_rw_verify_iov ...passed 00:07:04.602 Test: blob_unmap ...passed 00:07:04.602 Test: blob_iter ...passed 00:07:04.602 Test: blob_parse_md ...passed 00:07:04.602 Test: bs_load_pending_removal ...passed 00:07:04.602 Test: bs_unload ...[2024-07-26 05:04:23.574366] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:04.602 passed 00:07:04.602 Test: bs_usable_clusters ...passed 00:07:04.602 Test: blob_crc ...[2024-07-26 05:04:23.615295] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:04.602 [2024-07-26 05:04:23.615422] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:04.602 passed 00:07:04.602 Test: blob_flags ...passed 00:07:04.602 Test: bs_version ...passed 00:07:04.602 Test: blob_set_xattrs_test ...[2024-07-26 05:04:23.675691] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:04.602 [2024-07-26 05:04:23.675798] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:04.602 passed 00:07:04.861 Test: blob_thin_prov_alloc ...passed 00:07:04.861 Test: blob_insert_cluster_msg_test ...passed 00:07:04.861 Test: blob_thin_prov_rw ...passed 00:07:04.861 Test: blob_thin_prov_rle ...passed 00:07:04.861 Test: blob_thin_prov_rw_iov ...passed 00:07:04.861 Test: blob_snapshot_rw ...passed 00:07:04.861 Test: blob_snapshot_rw_iov ...passed 00:07:05.119 Test: blob_inflate_rw ...passed 00:07:05.119 Test: blob_snapshot_freeze_io ...passed 
00:07:05.378 Test: blob_operation_split_rw ...passed 00:07:05.378 Test: blob_operation_split_rw_iov ...passed 00:07:05.378 Test: blob_simultaneous_operations ...[2024-07-26 05:04:24.379610] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:05.378 [2024-07-26 05:04:24.379724] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.378 [2024-07-26 05:04:24.380825] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:05.378 [2024-07-26 05:04:24.380876] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.378 [2024-07-26 05:04:24.390310] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:05.378 [2024-07-26 05:04:24.390366] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.378 [2024-07-26 05:04:24.390464] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:05.378 [2024-07-26 05:04:24.390482] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.378 passed 00:07:05.378 Test: blob_persist_test ...passed 00:07:05.378 Test: blob_decouple_snapshot ...passed 00:07:05.378 Test: blob_seek_io_unit ...passed 00:07:05.637 Test: blob_nested_freezes ...passed 00:07:05.637 Suite: blob_blob_nocopy_extent 00:07:05.637 Test: blob_write ...passed 00:07:05.637 Test: blob_read ...passed 00:07:05.637 Test: blob_rw_verify ...passed 00:07:05.637 Test: blob_rw_verify_iov_nomem ...passed 00:07:05.637 Test: blob_rw_iov_read_only ...passed 00:07:05.637 Test: blob_xattr ...passed 00:07:05.637 Test: blob_dirty_shutdown ...passed 00:07:05.637 Test: blob_is_degraded ...passed 00:07:05.637 Suite: blob_esnap_bs_nocopy_extent 00:07:05.637 Test: blob_esnap_create ...passed 00:07:05.637 Test: blob_esnap_thread_add_remove ...passed 00:07:05.637 Test: blob_esnap_clone_snapshot ...passed 00:07:05.637 Test: blob_esnap_clone_inflate ...passed 00:07:05.896 Test: blob_esnap_clone_decouple ...passed 00:07:05.896 Test: blob_esnap_clone_reload ...passed 00:07:05.896 Test: blob_esnap_hotplug ...passed 00:07:05.896 Suite: blob_copy_noextent 00:07:05.896 Test: blob_init ...[2024-07-26 05:04:24.805155] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:05.896 passed 00:07:05.896 Test: blob_thin_provision ...passed 00:07:05.896 Test: blob_read_only ...passed 00:07:05.896 Test: bs_load ...[2024-07-26 05:04:24.836941] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:05.896 passed 00:07:05.896 Test: bs_load_custom_cluster_size ...passed 00:07:05.896 Test: bs_load_after_failed_grow ...passed 00:07:05.896 Test: bs_cluster_sz ...[2024-07-26 05:04:24.853831] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:05.896 [2024-07-26 05:04:24.854066] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or 
increase cluster size. 00:07:05.896 [2024-07-26 05:04:24.854108] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:05.896 passed 00:07:05.896 Test: bs_resize_md ...passed 00:07:05.896 Test: bs_destroy ...passed 00:07:05.896 Test: bs_type ...passed 00:07:05.896 Test: bs_super_block ...passed 00:07:05.896 Test: bs_test_recover_cluster_count ...passed 00:07:05.896 Test: bs_grow_live ...passed 00:07:05.896 Test: bs_grow_live_no_space ...passed 00:07:05.896 Test: bs_test_grow ...passed 00:07:05.896 Test: blob_serialize_test ...passed 00:07:05.896 Test: super_block_crc ...passed 00:07:05.896 Test: blob_thin_prov_write_count_io ...passed 00:07:05.896 Test: bs_load_iter_test ...passed 00:07:05.896 Test: blob_relations ...[2024-07-26 05:04:24.954462] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:05.896 [2024-07-26 05:04:24.954575] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.896 [2024-07-26 05:04:24.955155] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:05.896 [2024-07-26 05:04:24.955191] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.896 passed 00:07:05.896 Test: blob_relations2 ...[2024-07-26 05:04:24.963963] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:05.896 [2024-07-26 05:04:24.964052] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.896 [2024-07-26 05:04:24.964076] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:05.896 [2024-07-26 05:04:24.964088] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.896 [2024-07-26 05:04:24.965027] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:05.896 [2024-07-26 05:04:24.965107] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.896 [2024-07-26 05:04:24.965428] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:05.896 [2024-07-26 05:04:24.965465] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.896 passed 00:07:05.896 Test: blob_relations3 ...passed 00:07:06.155 Test: blobstore_clean_power_failure ...passed 00:07:06.155 Test: blob_delete_snapshot_power_failure ...[2024-07-26 05:04:25.056051] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:06.155 [2024-07-26 05:04:25.063535] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:06.155 [2024-07-26 05:04:25.063616] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:06.155 [2024-07-26 05:04:25.063652] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:06.155 [2024-07-26 05:04:25.071229] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:06.155 [2024-07-26 05:04:25.071313] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:06.155 [2024-07-26 05:04:25.071330] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:06.155 [2024-07-26 05:04:25.071347] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:06.155 [2024-07-26 05:04:25.078898] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:06.155 [2024-07-26 05:04:25.079012] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:06.155 [2024-07-26 05:04:25.086552] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:06.155 [2024-07-26 05:04:25.086671] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:06.155 [2024-07-26 05:04:25.094369] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:06.155 [2024-07-26 05:04:25.094475] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:06.155 passed 00:07:06.155 Test: blob_create_snapshot_power_failure ...[2024-07-26 05:04:25.116960] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:06.155 [2024-07-26 05:04:25.131365] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:06.155 [2024-07-26 05:04:25.138884] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:06.155 passed 00:07:06.155 Test: blob_io_unit ...passed 00:07:06.155 Test: blob_io_unit_compatibility ...passed 00:07:06.155 Test: blob_ext_md_pages ...passed 00:07:06.155 Test: blob_esnap_io_4096_4096 ...passed 00:07:06.155 Test: blob_esnap_io_512_512 ...passed 00:07:06.155 Test: blob_esnap_io_4096_512 ...passed 00:07:06.414 Test: blob_esnap_io_512_4096 ...passed 00:07:06.414 Suite: blob_bs_copy_noextent 00:07:06.414 Test: blob_open ...passed 00:07:06.414 Test: blob_create ...[2024-07-26 05:04:25.308339] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:06.414 passed 00:07:06.414 Test: blob_create_loop ...passed 00:07:06.414 Test: blob_create_fail ...[2024-07-26 05:04:25.374041] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:06.414 passed 00:07:06.414 Test: blob_create_internal ...passed 00:07:06.414 Test: blob_create_zero_extent ...passed 00:07:06.414 Test: blob_snapshot ...passed 00:07:06.414 Test: blob_clone ...passed 00:07:06.414 Test: blob_inflate ...[2024-07-26 05:04:25.478680] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:06.414 passed 00:07:06.414 Test: blob_delete ...passed 00:07:06.414 Test: blob_resize_test ...[2024-07-26 05:04:25.522433] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:06.673 passed 00:07:06.673 Test: channel_ops ...passed 00:07:06.673 Test: blob_super ...passed 00:07:06.673 Test: blob_rw_verify_iov ...passed 00:07:06.673 Test: blob_unmap ...passed 00:07:06.673 Test: blob_iter ...passed 00:07:06.673 Test: blob_parse_md ...passed 00:07:06.673 Test: bs_load_pending_removal ...passed 00:07:06.673 Test: bs_unload ...[2024-07-26 05:04:25.691390] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:06.673 passed 00:07:06.673 Test: bs_usable_clusters ...passed 00:07:06.673 Test: blob_crc ...[2024-07-26 05:04:25.735873] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:06.673 [2024-07-26 05:04:25.735993] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:06.673 passed 00:07:06.673 Test: blob_flags ...passed 00:07:06.932 Test: bs_version ...passed 00:07:06.932 Test: blob_set_xattrs_test ...[2024-07-26 05:04:25.804115] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:06.932 [2024-07-26 05:04:25.804230] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:06.932 passed 00:07:06.933 Test: blob_thin_prov_alloc ...passed 00:07:06.933 Test: blob_insert_cluster_msg_test ...passed 00:07:06.933 Test: blob_thin_prov_rw ...passed 00:07:06.933 Test: blob_thin_prov_rle ...passed 00:07:06.933 Test: blob_thin_prov_rw_iov ...passed 00:07:07.191 Test: blob_snapshot_rw ...passed 00:07:07.192 Test: blob_snapshot_rw_iov ...passed 00:07:07.192 Test: blob_inflate_rw ...passed 00:07:07.192 Test: blob_snapshot_freeze_io ...passed 00:07:07.451 Test: blob_operation_split_rw ...passed 00:07:07.451 Test: blob_operation_split_rw_iov ...passed 00:07:07.451 Test: blob_simultaneous_operations ...[2024-07-26 05:04:26.549277] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:07.451 [2024-07-26 05:04:26.549403] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:07.451 [2024-07-26 05:04:26.549840] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:07.451 [2024-07-26 05:04:26.549864] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:07.451 [2024-07-26 05:04:26.552072] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:07.451 [2024-07-26 05:04:26.552138] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:07.451 [2024-07-26 05:04:26.552221] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:07:07.451 [2024-07-26 05:04:26.552239] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:07.710 passed 00:07:07.710 Test: blob_persist_test ...passed 00:07:07.710 Test: blob_decouple_snapshot ...passed 00:07:07.710 Test: blob_seek_io_unit ...passed 00:07:07.710 Test: blob_nested_freezes ...passed 00:07:07.710 Suite: blob_blob_copy_noextent 00:07:07.710 Test: blob_write ...passed 00:07:07.710 Test: blob_read ...passed 00:07:07.710 Test: blob_rw_verify ...passed 00:07:07.710 Test: blob_rw_verify_iov_nomem ...passed 00:07:07.710 Test: blob_rw_iov_read_only ...passed 00:07:07.710 Test: blob_xattr ...passed 00:07:07.710 Test: blob_dirty_shutdown ...passed 00:07:07.710 Test: blob_is_degraded ...passed 00:07:07.710 Suite: blob_esnap_bs_copy_noextent 00:07:07.969 Test: blob_esnap_create ...passed 00:07:07.969 Test: blob_esnap_thread_add_remove ...passed 00:07:07.969 Test: blob_esnap_clone_snapshot ...passed 00:07:07.969 Test: blob_esnap_clone_inflate ...passed 00:07:07.969 Test: blob_esnap_clone_decouple ...passed 00:07:07.969 Test: blob_esnap_clone_reload ...passed 00:07:07.969 Test: blob_esnap_hotplug ...passed 00:07:07.969 Suite: blob_copy_extent 00:07:07.969 Test: blob_init ...[2024-07-26 05:04:26.950686] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:07.969 passed 00:07:07.969 Test: blob_thin_provision ...passed 00:07:07.969 Test: blob_read_only ...passed 00:07:07.969 Test: bs_load ...[2024-07-26 05:04:26.982234] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:07.969 passed 00:07:07.969 Test: bs_load_custom_cluster_size ...passed 00:07:07.969 Test: bs_load_after_failed_grow ...passed 00:07:07.969 Test: bs_cluster_sz ...[2024-07-26 05:04:26.999310] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:07.970 [2024-07-26 05:04:26.999552] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:07:07.970 [2024-07-26 05:04:26.999596] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:07.970 passed 00:07:07.970 Test: bs_resize_md ...passed 00:07:07.970 Test: bs_destroy ...passed 00:07:07.970 Test: bs_type ...passed 00:07:07.970 Test: bs_super_block ...passed 00:07:07.970 Test: bs_test_recover_cluster_count ...passed 00:07:07.970 Test: bs_grow_live ...passed 00:07:07.970 Test: bs_grow_live_no_space ...passed 00:07:07.970 Test: bs_test_grow ...passed 00:07:07.970 Test: blob_serialize_test ...passed 00:07:07.970 Test: super_block_crc ...passed 00:07:08.229 Test: blob_thin_prov_write_count_io ...passed 00:07:08.229 Test: bs_load_iter_test ...passed 00:07:08.229 Test: blob_relations ...[2024-07-26 05:04:27.096414] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:08.229 [2024-07-26 05:04:27.096526] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.229 [2024-07-26 05:04:27.097686] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:08.229 [2024-07-26 05:04:27.097787] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.229 passed 00:07:08.229 Test: blob_relations2 ...[2024-07-26 05:04:27.107894] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:08.229 [2024-07-26 05:04:27.107971] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.229 [2024-07-26 05:04:27.108025] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:08.229 [2024-07-26 05:04:27.108052] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.229 [2024-07-26 05:04:27.109565] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:08.229 [2024-07-26 05:04:27.109632] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.229 [2024-07-26 05:04:27.110115] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:08.229 [2024-07-26 05:04:27.110167] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.229 passed 00:07:08.229 Test: blob_relations3 ...passed 00:07:08.229 Test: blobstore_clean_power_failure ...passed 00:07:08.229 Test: blob_delete_snapshot_power_failure ...[2024-07-26 05:04:27.204485] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:08.229 [2024-07-26 05:04:27.214694] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:08.229 [2024-07-26 05:04:27.222682] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:08.229 [2024-07-26 05:04:27.222761] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:08.229 [2024-07-26 05:04:27.222798] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.229 [2024-07-26 05:04:27.230451] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:08.229 [2024-07-26 05:04:27.230530] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:08.229 [2024-07-26 05:04:27.230562] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:08.229 [2024-07-26 05:04:27.230581] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.229 [2024-07-26 05:04:27.238290] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:08.229 [2024-07-26 05:04:27.238385] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:08.229 [2024-07-26 05:04:27.238424] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:08.229 [2024-07-26 05:04:27.238443] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.229 [2024-07-26 05:04:27.246132] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:08.229 [2024-07-26 05:04:27.246244] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.229 [2024-07-26 05:04:27.253879] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:08.229 [2024-07-26 05:04:27.253999] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.229 [2024-07-26 05:04:27.261787] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:08.229 [2024-07-26 05:04:27.261896] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.229 passed 00:07:08.229 Test: blob_create_snapshot_power_failure ...[2024-07-26 05:04:27.284066] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:08.229 [2024-07-26 05:04:27.291464] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:08.229 [2024-07-26 05:04:27.305725] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:08.229 [2024-07-26 05:04:27.313307] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:08.229 passed 00:07:08.488 Test: blob_io_unit ...passed 00:07:08.488 Test: blob_io_unit_compatibility ...passed 00:07:08.488 Test: blob_ext_md_pages ...passed 00:07:08.488 Test: blob_esnap_io_4096_4096 ...passed 00:07:08.488 Test: blob_esnap_io_512_512 ...passed 00:07:08.488 Test: blob_esnap_io_4096_512 ...passed 00:07:08.488 Test: 
blob_esnap_io_512_4096 ...passed 00:07:08.488 Suite: blob_bs_copy_extent 00:07:08.488 Test: blob_open ...passed 00:07:08.488 Test: blob_create ...[2024-07-26 05:04:27.463450] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:08.488 passed 00:07:08.488 Test: blob_create_loop ...passed 00:07:08.489 Test: blob_create_fail ...[2024-07-26 05:04:27.533158] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:08.489 passed 00:07:08.489 Test: blob_create_internal ...passed 00:07:08.489 Test: blob_create_zero_extent ...passed 00:07:08.489 Test: blob_snapshot ...passed 00:07:08.748 Test: blob_clone ...passed 00:07:08.748 Test: blob_inflate ...[2024-07-26 05:04:27.633505] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:08.748 passed 00:07:08.748 Test: blob_delete ...passed 00:07:08.748 Test: blob_resize_test ...[2024-07-26 05:04:27.670990] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:08.748 passed 00:07:08.748 Test: channel_ops ...passed 00:07:08.748 Test: blob_super ...passed 00:07:08.748 Test: blob_rw_verify_iov ...passed 00:07:08.748 Test: blob_unmap ...passed 00:07:08.748 Test: blob_iter ...passed 00:07:08.748 Test: blob_parse_md ...passed 00:07:08.748 Test: bs_load_pending_removal ...passed 00:07:08.748 Test: bs_unload ...[2024-07-26 05:04:27.826690] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:08.748 passed 00:07:08.748 Test: bs_usable_clusters ...passed 00:07:09.007 Test: blob_crc ...[2024-07-26 05:04:27.866512] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:09.007 [2024-07-26 05:04:27.866638] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:09.007 passed 00:07:09.007 Test: blob_flags ...passed 00:07:09.007 Test: bs_version ...passed 00:07:09.007 Test: blob_set_xattrs_test ...[2024-07-26 05:04:27.927060] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:09.007 [2024-07-26 05:04:27.927175] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:09.007 passed 00:07:09.007 Test: blob_thin_prov_alloc ...passed 00:07:09.007 Test: blob_insert_cluster_msg_test ...passed 00:07:09.007 Test: blob_thin_prov_rw ...passed 00:07:09.007 Test: blob_thin_prov_rle ...passed 00:07:09.266 Test: blob_thin_prov_rw_iov ...passed 00:07:09.266 Test: blob_snapshot_rw ...passed 00:07:09.266 Test: blob_snapshot_rw_iov ...passed 00:07:09.266 Test: blob_inflate_rw ...passed 00:07:09.266 Test: blob_snapshot_freeze_io ...passed 00:07:09.526 Test: blob_operation_split_rw ...passed 00:07:09.526 Test: blob_operation_split_rw_iov ...passed 00:07:09.526 Test: blob_simultaneous_operations ...[2024-07-26 05:04:28.613954] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:09.526 [2024-07-26 
05:04:28.614065] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:09.526 [2024-07-26 05:04:28.614470] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:09.526 [2024-07-26 05:04:28.614494] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:09.526 [2024-07-26 05:04:28.616479] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:09.526 [2024-07-26 05:04:28.616534] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:09.526 [2024-07-26 05:04:28.616616] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:09.526 [2024-07-26 05:04:28.616634] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:09.526 passed 00:07:09.785 Test: blob_persist_test ...passed 00:07:09.785 Test: blob_decouple_snapshot ...passed 00:07:09.785 Test: blob_seek_io_unit ...passed 00:07:09.785 Test: blob_nested_freezes ...passed 00:07:09.785 Suite: blob_blob_copy_extent 00:07:09.785 Test: blob_write ...passed 00:07:09.785 Test: blob_read ...passed 00:07:09.785 Test: blob_rw_verify ...passed 00:07:09.785 Test: blob_rw_verify_iov_nomem ...passed 00:07:09.785 Test: blob_rw_iov_read_only ...passed 00:07:09.785 Test: blob_xattr ...passed 00:07:09.785 Test: blob_dirty_shutdown ...passed 00:07:09.785 Test: blob_is_degraded ...passed 00:07:09.785 Suite: blob_esnap_bs_copy_extent 00:07:10.044 Test: blob_esnap_create ...passed 00:07:10.044 Test: blob_esnap_thread_add_remove ...passed 00:07:10.044 Test: blob_esnap_clone_snapshot ...passed 00:07:10.044 Test: blob_esnap_clone_inflate ...passed 00:07:10.044 Test: blob_esnap_clone_decouple ...passed 00:07:10.044 Test: blob_esnap_clone_reload ...passed 00:07:10.044 Test: blob_esnap_hotplug ...passed 00:07:10.044 00:07:10.044 Run Summary: Type Total Ran Passed Failed Inactive 00:07:10.044 suites 16 16 n/a 0 0 00:07:10.044 tests 348 348 348 0 0 00:07:10.044 asserts 92605 92605 92605 0 n/a 00:07:10.044 00:07:10.044 Elapsed time = 8.602 seconds 00:07:10.044 05:04:29 -- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:07:10.044 00:07:10.044 00:07:10.044 CUnit - A unit testing framework for C - Version 2.1-3 00:07:10.044 http://cunit.sourceforge.net/ 00:07:10.044 00:07:10.044 00:07:10.044 Suite: blob_bdev 00:07:10.044 Test: create_bs_dev ...passed 00:07:10.044 Test: create_bs_dev_ro ...[2024-07-26 05:04:29.105928] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:07:10.044 passed 00:07:10.044 Test: create_bs_dev_rw ...passed 00:07:10.044 Test: claim_bs_dev ...passed 00:07:10.044 Test: claim_bs_dev_ro ...[2024-07-26 05:04:29.106360] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:07:10.044 passed 00:07:10.044 Test: deferred_destroy_refs ...passed 00:07:10.044 Test: deferred_destroy_channels ...passed 00:07:10.044 Test: deferred_destroy_threads ...passed 00:07:10.044 00:07:10.044 Run Summary: Type Total Ran Passed Failed Inactive 00:07:10.044 suites 1 1 n/a 0 0 00:07:10.044 tests 8 8 8 0 0 00:07:10.044 
asserts 119 119 119 0 n/a 00:07:10.044 00:07:10.044 Elapsed time = 0.001 seconds 00:07:10.044 05:04:29 -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:07:10.044 00:07:10.044 00:07:10.044 CUnit - A unit testing framework for C - Version 2.1-3 00:07:10.044 http://cunit.sourceforge.net/ 00:07:10.044 00:07:10.044 00:07:10.044 Suite: tree 00:07:10.044 Test: blobfs_tree_op_test ...passed 00:07:10.044 00:07:10.044 Run Summary: Type Total Ran Passed Failed Inactive 00:07:10.044 suites 1 1 n/a 0 0 00:07:10.044 tests 1 1 1 0 0 00:07:10.044 asserts 27 27 27 0 n/a 00:07:10.044 00:07:10.044 Elapsed time = 0.000 seconds 00:07:10.044 05:04:29 -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:07:10.303 00:07:10.303 00:07:10.303 CUnit - A unit testing framework for C - Version 2.1-3 00:07:10.303 http://cunit.sourceforge.net/ 00:07:10.303 00:07:10.303 00:07:10.303 Suite: blobfs_async_ut 00:07:10.303 Test: fs_init ...passed 00:07:10.303 Test: fs_open ...passed 00:07:10.303 Test: fs_create ...passed 00:07:10.303 Test: fs_truncate ...passed 00:07:10.303 Test: fs_rename ...[2024-07-26 05:04:29.275351] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:07:10.303 passed 00:07:10.303 Test: fs_rw_async ...passed 00:07:10.303 Test: fs_writev_readv_async ...passed 00:07:10.303 Test: tree_find_buffer_ut ...passed 00:07:10.303 Test: channel_ops ...passed 00:07:10.303 Test: channel_ops_sync ...passed 00:07:10.303 00:07:10.303 Run Summary: Type Total Ran Passed Failed Inactive 00:07:10.303 suites 1 1 n/a 0 0 00:07:10.303 tests 10 10 10 0 0 00:07:10.303 asserts 292 292 292 0 n/a 00:07:10.303 00:07:10.303 Elapsed time = 0.148 seconds 00:07:10.303 05:04:29 -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:07:10.303 00:07:10.303 00:07:10.303 CUnit - A unit testing framework for C - Version 2.1-3 00:07:10.303 http://cunit.sourceforge.net/ 00:07:10.303 00:07:10.303 00:07:10.303 Suite: blobfs_sync_ut 00:07:10.562 Test: cache_read_after_write ...[2024-07-26 05:04:29.437994] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:07:10.562 passed 00:07:10.562 Test: file_length ...passed 00:07:10.562 Test: append_write_to_extend_blob ...passed 00:07:10.562 Test: partial_buffer ...passed 00:07:10.562 Test: cache_write_null_buffer ...passed 00:07:10.562 Test: fs_create_sync ...passed 00:07:10.562 Test: fs_rename_sync ...passed 00:07:10.562 Test: cache_append_no_cache ...passed 00:07:10.562 Test: fs_delete_file_without_close ...passed 00:07:10.562 00:07:10.562 Run Summary: Type Total Ran Passed Failed Inactive 00:07:10.562 suites 1 1 n/a 0 0 00:07:10.562 tests 9 9 9 0 0 00:07:10.562 asserts 345 345 345 0 n/a 00:07:10.562 00:07:10.562 Elapsed time = 0.318 seconds 00:07:10.562 05:04:29 -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:07:10.562 00:07:10.562 00:07:10.562 CUnit - A unit testing framework for C - Version 2.1-3 00:07:10.562 http://cunit.sourceforge.net/ 00:07:10.563 00:07:10.563 00:07:10.563 Suite: blobfs_bdev_ut 00:07:10.563 Test: spdk_blobfs_bdev_detect_test ...[2024-07-26 05:04:29.612248] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 
00:07:10.563 passed 00:07:10.563 Test: spdk_blobfs_bdev_create_test ...passed 00:07:10.563 Test: spdk_blobfs_bdev_mount_test ...passed[2024-07-26 05:04:29.612886] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:07:10.563 00:07:10.563 00:07:10.563 Run Summary: Type Total Ran Passed Failed Inactive 00:07:10.563 suites 1 1 n/a 0 0 00:07:10.563 tests 3 3 3 0 0 00:07:10.563 asserts 9 9 9 0 n/a 00:07:10.563 00:07:10.563 Elapsed time = 0.001 seconds 00:07:10.563 00:07:10.563 real 0m9.227s 00:07:10.563 user 0m8.735s 00:07:10.563 sys 0m0.662s 00:07:10.563 05:04:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.563 05:04:29 -- common/autotest_common.sh@10 -- # set +x 00:07:10.563 ************************************ 00:07:10.563 END TEST unittest_blob_blobfs 00:07:10.563 ************************************ 00:07:10.563 05:04:29 -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:07:10.563 05:04:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:10.563 05:04:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:10.563 05:04:29 -- common/autotest_common.sh@10 -- # set +x 00:07:10.823 ************************************ 00:07:10.823 START TEST unittest_event 00:07:10.823 ************************************ 00:07:10.823 05:04:29 -- common/autotest_common.sh@1104 -- # unittest_event 00:07:10.823 05:04:29 -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:07:10.823 00:07:10.823 00:07:10.823 CUnit - A unit testing framework for C - Version 2.1-3 00:07:10.823 http://cunit.sourceforge.net/ 00:07:10.823 00:07:10.823 00:07:10.823 Suite: app_suite 00:07:10.823 Test: test_spdk_app_parse_args ...app_ut [options] 00:07:10.823 options: 00:07:10.823 -c, --config JSON config file (default none) 00:07:10.823 --json JSON config file (default none) 00:07:10.823 --json-ignore-init-errors 00:07:10.823 don't exit on invalid config entry 00:07:10.823 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:10.823 -g, --single-file-segments 00:07:10.823 force creating just one hugetlbfs file 00:07:10.823 -h, --help show this usage 00:07:10.823 -i, --shm-id shared memory ID (optional) 00:07:10.823 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:07:10.823 --lcores lcore to CPU mapping list. The list is in the format: 00:07:10.823 app_ut: invalid option -- 'z' 00:07:10.823 [<,lcores[@CPUs]>...] 00:07:10.823 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:10.823 Within the group, '-' is used for range separator, 00:07:10.823 ',' is used for single number separator. 00:07:10.823 '( )' can be omitted for single element group, 00:07:10.823 '@' can be omitted if cpus and lcores have the same value 00:07:10.823 -n, --mem-channels channel number of memory channels used for DPDK 00:07:10.823 -p, --main-core main (primary) core for DPDK 00:07:10.823 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:10.823 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:10.823 --disable-cpumask-locks Disable CPU core lock files. 
00:07:10.823 --silence-noticelog disable notice level logging to stderr 00:07:10.823 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:10.823 -u, --no-pci disable PCI access 00:07:10.823 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:10.823 --max-delay maximum reactor delay (in microseconds) 00:07:10.823 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:10.823 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:10.823 -R, --huge-unlink unlink huge files after initialization 00:07:10.823 -v, --version print SPDK version 00:07:10.823 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:10.823 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:10.823 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:10.823 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:07:10.823 Tracepoints vary in size and can use more than one trace entry. 00:07:10.823 --rpcs-allowed comma-separated list of permitted RPCS 00:07:10.823 --env-context Opaque context for use of the env implementation 00:07:10.823 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:10.823 --no-huge run without using hugepages 00:07:10.823 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:07:10.823 -e, --tpoint-group [:] 00:07:10.823 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:07:10.823 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:07:10.823 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:07:10.823 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:07:10.823 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:07:10.823 app_ut [options] 00:07:10.823 options: 00:07:10.823 -c, --config JSON config file (default none) 00:07:10.823 --json JSON config file (default none) 00:07:10.823 --json-ignore-init-errors 00:07:10.823 don't exit on invalid config entry 00:07:10.823 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:10.823 -g, --single-file-segments 00:07:10.823 force creating just one hugetlbfs file 00:07:10.823 -h, --help show this usage 00:07:10.823 -i, --shm-id shared memory ID (optional) 00:07:10.823 app_ut: unrecognized option '--test-long-opt' 00:07:10.823 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:07:10.823 --lcores lcore to CPU mapping list. The list is in the format: 00:07:10.823 [<,lcores[@CPUs]>...] 00:07:10.823 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:10.823 Within the group, '-' is used for range separator, 00:07:10.823 ',' is used for single number separator. 
00:07:10.823 '( )' can be omitted for single element group, 00:07:10.823 '@' can be omitted if cpus and lcores have the same value 00:07:10.823 -n, --mem-channels channel number of memory channels used for DPDK 00:07:10.824 -p, --main-core main (primary) core for DPDK 00:07:10.824 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:10.824 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:10.824 --disable-cpumask-locks Disable CPU core lock files. 00:07:10.824 --silence-noticelog disable notice level logging to stderr 00:07:10.824 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:10.824 -u, --no-pci disable PCI access 00:07:10.824 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:10.824 --max-delay maximum reactor delay (in microseconds) 00:07:10.824 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:10.824 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:10.824 -R, --huge-unlink unlink huge files after initialization 00:07:10.824 -v, --version print SPDK version 00:07:10.824 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:10.824 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:10.824 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:10.824 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:07:10.824 Tracepoints vary in size and can use more than one trace entry. 00:07:10.824 --rpcs-allowed comma-separated list of permitted RPCS 00:07:10.824 --env-context Opaque context for use of the env implementation 00:07:10.824 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:10.824 --no-huge run without using hugepages 00:07:10.824 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:07:10.824 -e, --tpoint-group [:] 00:07:10.824 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:07:10.824 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:07:10.824 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:07:10.824 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:07:10.824 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:07:10.824 app_ut [options] 00:07:10.824 options: 00:07:10.824 -c, --config JSON config file (default none) 00:07:10.824 --json JSON config file (default none) 00:07:10.824 --json-ignore-init-errors 00:07:10.824 don't exit on invalid config entry 00:07:10.824 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:10.824 -g, --single-file-segments 00:07:10.824 force creating just one hugetlbfs file 00:07:10.824 -h, --help show this usage 00:07:10.824 -i, --shm-id shared memory ID (optional) 00:07:10.824 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:07:10.824 --lcores lcore to CPU mapping list. The list is in the format: 00:07:10.824 [<,lcores[@CPUs]>...] 00:07:10.824 [2024-07-26 05:04:29.696601] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1030:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 
00:07:10.824 [2024-07-26 05:04:29.696861] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1211:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:07:10.824 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:10.824 Within the group, '-' is used for range separator, 00:07:10.824 ',' is used for single number separator. 00:07:10.824 '( )' can be omitted for single element group, 00:07:10.824 '@' can be omitted if cpus and lcores have the same value 00:07:10.824 -n, --mem-channels channel number of memory channels used for DPDK 00:07:10.824 -p, --main-core main (primary) core for DPDK 00:07:10.824 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:10.824 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:10.824 --disable-cpumask-locks Disable CPU core lock files. 00:07:10.824 --silence-noticelog disable notice level logging to stderr 00:07:10.824 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:10.824 -u, --no-pci disable PCI access 00:07:10.824 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:10.824 --max-delay maximum reactor delay (in microseconds) 00:07:10.824 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:10.824 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:10.824 -R, --huge-unlink unlink huge files after initialization 00:07:10.824 -v, --version print SPDK version 00:07:10.824 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:10.824 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:10.824 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:10.824 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:07:10.824 Tracepoints vary in size and can use more than one trace entry. 00:07:10.824 --rpcs-allowed comma-separated list of permitted RPCS 00:07:10.824 --env-context Opaque context for use of the env implementation 00:07:10.824 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:10.824 --no-huge run without using hugepages 00:07:10.824 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:07:10.824 -e, --tpoint-group [:] 00:07:10.824 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:07:10.824 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:07:10.824 Groups and masks can be combined (e.g. thread,bdev:0x1). 
00:07:10.824 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:07:10.824 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:07:10.824 passed 00:07:10.824 00:07:10.824 Run Summary: Type Total Ran Passed Failed Inactive 00:07:10.824 suites 1 1 n/a 0 0 00:07:10.824 tests 1 1 1 0 0 00:07:10.824 asserts 8 8 8 0 n/a 00:07:10.824 00:07:10.824 Elapsed time = 0.001 seconds 00:07:10.824 [2024-07-26 05:04:29.697073] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1116:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:07:10.824 05:04:29 -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:07:10.824 00:07:10.824 00:07:10.824 CUnit - A unit testing framework for C - Version 2.1-3 00:07:10.824 http://cunit.sourceforge.net/ 00:07:10.824 00:07:10.824 00:07:10.824 Suite: app_suite 00:07:10.824 Test: test_create_reactor ...passed 00:07:10.824 Test: test_init_reactors ...passed 00:07:10.824 Test: test_event_call ...passed 00:07:10.824 Test: test_schedule_thread ...passed 00:07:10.824 Test: test_reschedule_thread ...passed 00:07:10.824 Test: test_bind_thread ...passed 00:07:10.824 Test: test_for_each_reactor ...passed 00:07:10.824 Test: test_reactor_stats ...passed 00:07:10.824 Test: test_scheduler ...passed 00:07:10.824 Test: test_governor ...passed 00:07:10.824 00:07:10.824 Run Summary: Type Total Ran Passed Failed Inactive 00:07:10.824 suites 1 1 n/a 0 0 00:07:10.824 tests 10 10 10 0 0 00:07:10.824 asserts 344 344 344 0 n/a 00:07:10.824 00:07:10.824 Elapsed time = 0.018 seconds 00:07:10.824 00:07:10.824 real 0m0.089s 00:07:10.824 user 0m0.048s 00:07:10.824 sys 0m0.041s 00:07:10.824 05:04:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.824 05:04:29 -- common/autotest_common.sh@10 -- # set +x 00:07:10.824 ************************************ 00:07:10.824 END TEST unittest_event 00:07:10.824 ************************************ 00:07:10.824 05:04:29 -- unit/unittest.sh@233 -- # uname -s 00:07:10.824 05:04:29 -- unit/unittest.sh@233 -- # '[' Linux = Linux ']' 00:07:10.824 05:04:29 -- unit/unittest.sh@234 -- # run_test unittest_ftl unittest_ftl 00:07:10.824 05:04:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:10.824 05:04:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:10.824 05:04:29 -- common/autotest_common.sh@10 -- # set +x 00:07:10.824 ************************************ 00:07:10.824 START TEST unittest_ftl 00:07:10.824 ************************************ 00:07:10.824 05:04:29 -- common/autotest_common.sh@1104 -- # unittest_ftl 00:07:10.824 05:04:29 -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:07:10.824 00:07:10.824 00:07:10.824 CUnit - A unit testing framework for C - Version 2.1-3 00:07:10.824 http://cunit.sourceforge.net/ 00:07:10.824 00:07:10.824 00:07:10.824 Suite: ftl_band_suite 00:07:10.824 Test: test_band_block_offset_from_addr_base ...passed 00:07:10.824 Test: test_band_block_offset_from_addr_offset ...passed 00:07:11.084 Test: test_band_addr_from_block_offset ...passed 00:07:11.084 Test: test_band_set_addr ...passed 00:07:11.084 Test: test_invalidate_addr ...passed 00:07:11.084 Test: test_next_xfer_addr ...passed 00:07:11.084 00:07:11.084 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.084 suites 1 1 n/a 0 0 00:07:11.084 tests 6 6 6 0 0 00:07:11.084 asserts 30356 30356 30356 0 n/a 00:07:11.084 
00:07:11.084 Elapsed time = 0.196 seconds 00:07:11.084 05:04:30 -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:07:11.084 00:07:11.084 00:07:11.084 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.084 http://cunit.sourceforge.net/ 00:07:11.084 00:07:11.084 00:07:11.084 Suite: ftl_bitmap 00:07:11.084 Test: test_ftl_bitmap_create ...passed 00:07:11.084 Test: test_ftl_bitmap_get ...[2024-07-26 05:04:30.121154] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:07:11.084 [2024-07-26 05:04:30.121380] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:07:11.084 passed 00:07:11.084 Test: test_ftl_bitmap_set ...passed 00:07:11.084 Test: test_ftl_bitmap_clear ...passed 00:07:11.084 Test: test_ftl_bitmap_find_first_set ...passed 00:07:11.084 Test: test_ftl_bitmap_find_first_clear ...passed 00:07:11.084 Test: test_ftl_bitmap_count_set ...passed 00:07:11.084 00:07:11.084 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.084 suites 1 1 n/a 0 0 00:07:11.084 tests 7 7 7 0 0 00:07:11.084 asserts 137 137 137 0 n/a 00:07:11.084 00:07:11.084 Elapsed time = 0.002 seconds 00:07:11.084 05:04:30 -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:07:11.084 00:07:11.084 00:07:11.084 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.084 http://cunit.sourceforge.net/ 00:07:11.084 00:07:11.084 00:07:11.084 Suite: ftl_io_suite 00:07:11.084 Test: test_completion ...passed 00:07:11.084 Test: test_multiple_ios ...passed 00:07:11.084 00:07:11.084 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.084 suites 1 1 n/a 0 0 00:07:11.084 tests 2 2 2 0 0 00:07:11.084 asserts 47 47 47 0 n/a 00:07:11.084 00:07:11.084 Elapsed time = 0.004 seconds 00:07:11.084 05:04:30 -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:07:11.084 00:07:11.084 00:07:11.084 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.084 http://cunit.sourceforge.net/ 00:07:11.084 00:07:11.084 00:07:11.084 Suite: ftl_mngt 00:07:11.084 Test: test_next_step ...passed 00:07:11.084 Test: test_continue_step ...passed 00:07:11.084 Test: test_get_func_and_step_cntx_alloc ...passed 00:07:11.084 Test: test_fail_step ...passed 00:07:11.084 Test: test_mngt_call_and_call_rollback ...passed 00:07:11.084 Test: test_nested_process_failure ...passed 00:07:11.084 00:07:11.084 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.084 suites 1 1 n/a 0 0 00:07:11.084 tests 6 6 6 0 0 00:07:11.084 asserts 176 176 176 0 n/a 00:07:11.084 00:07:11.084 Elapsed time = 0.002 seconds 00:07:11.344 05:04:30 -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:07:11.344 00:07:11.344 00:07:11.344 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.344 http://cunit.sourceforge.net/ 00:07:11.344 00:07:11.344 00:07:11.344 Suite: ftl_mempool 00:07:11.344 Test: test_ftl_mempool_create ...passed 00:07:11.344 Test: test_ftl_mempool_get_put ...passed 00:07:11.344 00:07:11.344 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.344 suites 1 1 n/a 0 0 00:07:11.344 tests 2 2 2 0 0 00:07:11.344 asserts 36 36 36 0 n/a 00:07:11.344 00:07:11.344 Elapsed time = 0.000 seconds 00:07:11.344 05:04:30 -- unit/unittest.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:07:11.344 00:07:11.344 00:07:11.344 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.344 http://cunit.sourceforge.net/ 00:07:11.344 00:07:11.344 00:07:11.344 Suite: ftl_addr64_suite 00:07:11.344 Test: test_addr_cached ...passed 00:07:11.344 00:07:11.344 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.344 suites 1 1 n/a 0 0 00:07:11.344 tests 1 1 1 0 0 00:07:11.344 asserts 1536 1536 1536 0 n/a 00:07:11.344 00:07:11.344 Elapsed time = 0.000 seconds 00:07:11.344 05:04:30 -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:07:11.344 00:07:11.344 00:07:11.344 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.344 http://cunit.sourceforge.net/ 00:07:11.344 00:07:11.344 00:07:11.344 Suite: ftl_sb 00:07:11.344 Test: test_sb_crc_v2 ...passed 00:07:11.344 Test: test_sb_crc_v3 ...passed 00:07:11.344 Test: test_sb_v3_md_layout ...[2024-07-26 05:04:30.276539] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:07:11.344 [2024-07-26 05:04:30.276785] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:11.344 [2024-07-26 05:04:30.276830] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:11.344 [2024-07-26 05:04:30.276850] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:11.344 [2024-07-26 05:04:30.276872] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:07:11.344 [2024-07-26 05:04:30.276895] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:07:11.344 [2024-07-26 05:04:30.276920] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:07:11.344 [2024-07-26 05:04:30.276941] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:07:11.344 passed 00:07:11.344 Test: test_sb_v5_md_layout ...[2024-07-26 05:04:30.277001] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:07:11.344 [2024-07-26 05:04:30.277070] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:07:11.344 [2024-07-26 05:04:30.277100] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:07:11.344 passed 00:07:11.344 00:07:11.344 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.344 suites 1 1 n/a 0 0 00:07:11.344 tests 4 4 4 0 0 00:07:11.344 asserts 148 148 148 0 n/a 00:07:11.344 00:07:11.344 Elapsed time = 0.002 seconds 00:07:11.344 05:04:30 -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:07:11.344 00:07:11.344 00:07:11.344 CUnit - A unit testing framework 
for C - Version 2.1-3 00:07:11.344 http://cunit.sourceforge.net/ 00:07:11.344 00:07:11.344 00:07:11.344 Suite: ftl_layout_upgrade 00:07:11.344 Test: test_l2p_upgrade ...passed 00:07:11.344 00:07:11.344 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.344 suites 1 1 n/a 0 0 00:07:11.344 tests 1 1 1 0 0 00:07:11.344 asserts 140 140 140 0 n/a 00:07:11.344 00:07:11.344 Elapsed time = 0.001 seconds 00:07:11.344 00:07:11.344 real 0m0.509s 00:07:11.344 user 0m0.212s 00:07:11.344 sys 0m0.297s 00:07:11.344 05:04:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.344 05:04:30 -- common/autotest_common.sh@10 -- # set +x 00:07:11.344 ************************************ 00:07:11.344 END TEST unittest_ftl 00:07:11.344 ************************************ 00:07:11.344 05:04:30 -- unit/unittest.sh@237 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:07:11.344 05:04:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:11.344 05:04:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:11.344 05:04:30 -- common/autotest_common.sh@10 -- # set +x 00:07:11.344 ************************************ 00:07:11.344 START TEST unittest_accel 00:07:11.344 ************************************ 00:07:11.344 05:04:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:07:11.344 00:07:11.344 00:07:11.344 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.344 http://cunit.sourceforge.net/ 00:07:11.344 00:07:11.344 00:07:11.344 Suite: accel_sequence 00:07:11.344 Test: test_sequence_fill_copy ...passed 00:07:11.344 Test: test_sequence_abort ...passed 00:07:11.344 Test: test_sequence_append_error ...passed 00:07:11.344 Test: test_sequence_completion_error ...[2024-07-26 05:04:30.411410] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7377e43287c0 00:07:11.345 [2024-07-26 05:04:30.411711] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7377e43287c0 00:07:11.345 passed 00:07:11.345 Test: test_sequence_decompress ...[2024-07-26 05:04:30.411763] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7377e43287c0 00:07:11.345 [2024-07-26 05:04:30.411800] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7377e43287c0 00:07:11.345 passed 00:07:11.345 Test: test_sequence_reverse ...passed 00:07:11.345 Test: test_sequence_copy_elision ...passed 00:07:11.345 Test: test_sequence_accel_buffers ...passed 00:07:11.345 Test: test_sequence_memory_domain ...[2024-07-26 05:04:30.424369] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1728:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:07:11.345 [2024-07-26 05:04:30.424573] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1767:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:07:11.345 passed 00:07:11.345 Test: test_sequence_module_memory_domain ...passed 00:07:11.345 Test: test_sequence_crypto ...passed 00:07:11.345 Test: test_sequence_driver ...[2024-07-26 05:04:30.432029] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1875:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7377e15aa7c0 using driver: ut 00:07:11.345 
[2024-07-26 05:04:30.432117] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1939:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7377e15aa7c0 through driver: ut 00:07:11.345 passed 00:07:11.345 Test: test_sequence_same_iovs ...passed 00:07:11.345 Test: test_sequence_crc32 ...passed 00:07:11.345 Suite: accel 00:07:11.345 Test: test_spdk_accel_task_complete ...passed 00:07:11.345 Test: test_get_task ...passed 00:07:11.345 Test: test_spdk_accel_submit_copy ...passed 00:07:11.345 Test: test_spdk_accel_submit_dualcast ...passed 00:07:11.345 Test: test_spdk_accel_submit_compare ...passed 00:07:11.345 Test: test_spdk_accel_submit_fill ...[2024-07-26 05:04:30.437562] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:07:11.345 [2024-07-26 05:04:30.437638] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:07:11.345 passed 00:07:11.345 Test: test_spdk_accel_submit_crc32c ...passed 00:07:11.345 Test: test_spdk_accel_submit_crc32cv ...passed 00:07:11.345 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:07:11.345 Test: test_spdk_accel_submit_xor ...passed 00:07:11.345 Test: test_spdk_accel_module_find_by_name ...passed 00:07:11.345 Test: test_spdk_accel_module_register ...passed 00:07:11.345 00:07:11.345 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.345 suites 2 2 n/a 0 0 00:07:11.345 tests 26 26 26 0 0 00:07:11.345 asserts 831 831 831 0 n/a 00:07:11.345 00:07:11.345 Elapsed time = 0.039 seconds 00:07:11.604 00:07:11.604 real 0m0.082s 00:07:11.604 user 0m0.044s 00:07:11.604 sys 0m0.038s 00:07:11.604 05:04:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.605 05:04:30 -- common/autotest_common.sh@10 -- # set +x 00:07:11.605 ************************************ 00:07:11.605 END TEST unittest_accel 00:07:11.605 ************************************ 00:07:11.605 05:04:30 -- unit/unittest.sh@238 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:07:11.605 05:04:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:11.605 05:04:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:11.605 05:04:30 -- common/autotest_common.sh@10 -- # set +x 00:07:11.605 ************************************ 00:07:11.605 START TEST unittest_ioat 00:07:11.605 ************************************ 00:07:11.605 05:04:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:07:11.605 00:07:11.605 00:07:11.605 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.605 http://cunit.sourceforge.net/ 00:07:11.605 00:07:11.605 00:07:11.605 Suite: ioat 00:07:11.605 Test: ioat_state_check ...passed 00:07:11.605 00:07:11.605 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.605 suites 1 1 n/a 0 0 00:07:11.605 tests 1 1 1 0 0 00:07:11.605 asserts 32 32 32 0 n/a 00:07:11.605 00:07:11.605 Elapsed time = 0.000 seconds 00:07:11.605 00:07:11.605 real 0m0.027s 00:07:11.605 user 0m0.014s 00:07:11.605 sys 0m0.013s 00:07:11.605 05:04:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.605 05:04:30 -- common/autotest_common.sh@10 -- # set +x 00:07:11.605 ************************************ 00:07:11.605 END TEST unittest_ioat 00:07:11.605 ************************************ 00:07:11.605 05:04:30 -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' 
/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:11.605 05:04:30 -- unit/unittest.sh@240 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:07:11.605 05:04:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:11.605 05:04:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:11.605 05:04:30 -- common/autotest_common.sh@10 -- # set +x 00:07:11.605 ************************************ 00:07:11.605 START TEST unittest_idxd_user 00:07:11.605 ************************************ 00:07:11.605 05:04:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:07:11.605 00:07:11.605 00:07:11.605 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.605 http://cunit.sourceforge.net/ 00:07:11.605 00:07:11.605 00:07:11.605 Suite: idxd_user 00:07:11.605 Test: test_idxd_wait_cmd ...[2024-07-26 05:04:30.616760] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:07:11.605 passed 00:07:11.605 Test: test_idxd_reset_dev ...passed 00:07:11.605 Test: test_idxd_group_config ...[2024-07-26 05:04:30.616921] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:07:11.605 [2024-07-26 05:04:30.616993] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:07:11.605 [2024-07-26 05:04:30.617047] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:07:11.605 passed 00:07:11.605 Test: test_idxd_wq_config ...passed 00:07:11.605 00:07:11.605 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.605 suites 1 1 n/a 0 0 00:07:11.605 tests 4 4 4 0 0 00:07:11.605 asserts 20 20 20 0 n/a 00:07:11.605 00:07:11.605 Elapsed time = 0.001 seconds 00:07:11.605 00:07:11.605 real 0m0.029s 00:07:11.605 user 0m0.009s 00:07:11.605 sys 0m0.020s 00:07:11.605 05:04:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.605 05:04:30 -- common/autotest_common.sh@10 -- # set +x 00:07:11.605 ************************************ 00:07:11.605 END TEST unittest_idxd_user 00:07:11.605 ************************************ 00:07:11.605 05:04:30 -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 00:07:11.605 05:04:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:11.605 05:04:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:11.605 05:04:30 -- common/autotest_common.sh@10 -- # set +x 00:07:11.605 ************************************ 00:07:11.605 START TEST unittest_iscsi 00:07:11.605 ************************************ 00:07:11.605 05:04:30 -- common/autotest_common.sh@1104 -- # unittest_iscsi 00:07:11.605 05:04:30 -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:07:11.605 00:07:11.605 00:07:11.605 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.605 http://cunit.sourceforge.net/ 00:07:11.605 00:07:11.605 00:07:11.605 Suite: conn_suite 00:07:11.605 Test: read_task_split_in_order_case ...passed 00:07:11.605 Test: read_task_split_reverse_order_case ...passed 00:07:11.605 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:07:11.605 Test: process_non_read_task_completion_test ...passed 00:07:11.605 Test: free_tasks_on_connection ...passed 00:07:11.605 Test: free_tasks_with_queued_datain ...passed 00:07:11.605 Test: 
abort_queued_datain_task_test ...passed 00:07:11.605 Test: abort_queued_datain_tasks_test ...passed 00:07:11.605 00:07:11.605 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.605 suites 1 1 n/a 0 0 00:07:11.605 tests 8 8 8 0 0 00:07:11.605 asserts 230 230 230 0 n/a 00:07:11.605 00:07:11.605 Elapsed time = 0.000 seconds 00:07:11.865 05:04:30 -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:07:11.865 00:07:11.865 00:07:11.865 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.865 http://cunit.sourceforge.net/ 00:07:11.865 00:07:11.865 00:07:11.865 Suite: iscsi_suite 00:07:11.865 Test: param_negotiation_test ...passed 00:07:11.865 Test: list_negotiation_test ...passed 00:07:11.865 Test: parse_valid_test ...passed 00:07:11.865 Test: parse_invalid_test ...[2024-07-26 05:04:30.757198] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:07:11.865 [2024-07-26 05:04:30.757703] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:07:11.865 [2024-07-26 05:04:30.757797] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 208:iscsi_parse_param: *ERROR*: Empty key 00:07:11.865 [2024-07-26 05:04:30.757889] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:07:11.865 [2024-07-26 05:04:30.758121] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 256 00:07:11.865 [2024-07-26 05:04:30.758222] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 215:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:07:11.865 [2024-07-26 05:04:30.758424] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 229:iscsi_parse_param: *ERROR*: Duplicated Key B 00:07:11.865 passed 00:07:11.865 00:07:11.865 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.865 suites 1 1 n/a 0 0 00:07:11.865 tests 4 4 4 0 0 00:07:11.865 asserts 161 161 161 0 n/a 00:07:11.865 00:07:11.865 Elapsed time = 0.009 seconds 00:07:11.865 05:04:30 -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:07:11.865 00:07:11.865 00:07:11.865 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.865 http://cunit.sourceforge.net/ 00:07:11.865 00:07:11.865 00:07:11.865 Suite: iscsi_target_node_suite 00:07:11.865 Test: add_lun_test_cases ...[2024-07-26 05:04:30.793850] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1248:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:07:11.865 [2024-07-26 05:04:30.794088] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:07:11.865 [2024-07-26 05:04:30.794124] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:07:11.865 [2024-07-26 05:04:30.794155] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:07:11.865 passed 00:07:11.865 Test: allow_any_allowed ...passed 00:07:11.865 Test: allow_ipv6_allowed ...passed 00:07:11.865 Test: allow_ipv6_denied ...passed 00:07:11.865 Test: allow_ipv6_invalid ...passed 00:07:11.865 Test: allow_ipv4_allowed ...passed 00:07:11.865 Test: allow_ipv4_denied ...passed 00:07:11.865 Test: allow_ipv4_invalid ...passed 00:07:11.865 Test: node_access_allowed ...[2024-07-26 05:04:30.794200] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: 
*ERROR*: spdk_scsi_dev_add_lun failed 00:07:11.865 passed 00:07:11.865 Test: node_access_denied_by_empty_netmask ...passed 00:07:11.865 Test: node_access_multi_initiator_groups_cases ...passed 00:07:11.865 Test: allow_iscsi_name_multi_maps_case ...passed 00:07:11.865 Test: chap_param_test_cases ...[2024-07-26 05:04:30.794818] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:07:11.865 [2024-07-26 05:04:30.794863] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:07:11.865 [2024-07-26 05:04:30.794892] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:07:11.865 [2024-07-26 05:04:30.794962] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:07:11.865 passed 00:07:11.865 00:07:11.865 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.865 suites 1 1 n/a 0 0 00:07:11.865 tests 13 13 13 0 0 00:07:11.865 asserts 50 50 50 0 n/a 00:07:11.865 00:07:11.865 Elapsed time = 0.001 seconds 00:07:11.865 [2024-07-26 05:04:30.795034] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:07:11.865 05:04:30 -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:07:11.865 00:07:11.865 00:07:11.865 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.865 http://cunit.sourceforge.net/ 00:07:11.865 00:07:11.865 00:07:11.865 Suite: iscsi_suite 00:07:11.865 Test: op_login_check_target_test ...passed 00:07:11.865 Test: op_login_session_normal_test ...[2024-07-26 05:04:30.832601] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:07:11.865 [2024-07-26 05:04:30.832919] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:11.865 [2024-07-26 05:04:30.832962] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:11.865 [2024-07-26 05:04:30.832991] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:11.865 [2024-07-26 05:04:30.833070] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:07:11.865 [2024-07-26 05:04:30.833114] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:07:11.865 [2024-07-26 05:04:30.833181] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:07:11.865 [2024-07-26 05:04:30.833213] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:07:11.865 passed 00:07:11.865 Test: maxburstlength_test ...[2024-07-26 05:04:30.833465] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:07:11.865 passed 00:07:11.865 Test: underflow_for_read_transfer_test ...passed 00:07:11.865 Test: underflow_for_zero_read_transfer_test 
...[2024-07-26 05:04:30.833521] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:07:11.865 passed 00:07:11.865 Test: underflow_for_request_sense_test ...passed 00:07:11.865 Test: underflow_for_check_condition_test ...passed 00:07:11.865 Test: add_transfer_task_test ...passed 00:07:11.865 Test: get_transfer_task_test ...passed 00:07:11.865 Test: del_transfer_task_test ...passed 00:07:11.865 Test: clear_all_transfer_tasks_test ...passed 00:07:11.865 Test: build_iovs_test ...passed 00:07:11.865 Test: build_iovs_with_md_test ...passed 00:07:11.865 Test: pdu_hdr_op_login_test ...[2024-07-26 05:04:30.835191] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:07:11.865 [2024-07-26 05:04:30.835299] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:07:11.865 [2024-07-26 05:04:30.835375] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:07:11.865 passed 00:07:11.865 Test: pdu_hdr_op_text_test ...[2024-07-26 05:04:30.835470] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:07:11.865 [2024-07-26 05:04:30.835545] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:07:11.865 [2024-07-26 05:04:30.835580] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:07:11.865 passed 00:07:11.865 Test: pdu_hdr_op_logout_test ...passed 00:07:11.865 Test: pdu_hdr_op_scsi_test ...[2024-07-26 05:04:30.835665] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
00:07:11.865 [2024-07-26 05:04:30.835786] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:07:11.865 [2024-07-26 05:04:30.835819] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:07:11.866 [2024-07-26 05:04:30.835859] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:07:11.866 [2024-07-26 05:04:30.835943] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:07:11.866 [2024-07-26 05:04:30.836045] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:07:11.866 passed 00:07:11.866 Test: pdu_hdr_op_task_mgmt_test ...[2024-07-26 05:04:30.836216] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:07:11.866 [2024-07-26 05:04:30.836304] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:07:11.866 [2024-07-26 05:04:30.836409] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:07:11.866 passed 00:07:11.866 Test: pdu_hdr_op_nopout_test ...[2024-07-26 05:04:30.836595] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:07:11.866 [2024-07-26 05:04:30.836665] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:07:11.866 [2024-07-26 05:04:30.836704] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:07:11.866 [2024-07-26 05:04:30.836731] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:07:11.866 passed 00:07:11.866 Test: pdu_hdr_op_data_test ...[2024-07-26 05:04:30.836788] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:07:11.866 [2024-07-26 05:04:30.836846] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:07:11.866 [2024-07-26 05:04:30.836905] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:07:11.866 [2024-07-26 05:04:30.836937] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:07:11.866 [2024-07-26 05:04:30.836993] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:07:11.866 [2024-07-26 05:04:30.837085] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:07:11.866 passed 00:07:11.866 Test: empty_text_with_cbit_test ...passed 00:07:11.866 Test: pdu_payload_read_test ...[2024-07-26 05:04:30.837118] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:07:11.866 [2024-07-26 05:04:30.839290] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:07:11.866 passed 00:07:11.866 Test: data_out_pdu_sequence_test ...passed 00:07:11.866 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:07:11.866 00:07:11.866 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.866 suites 1 1 n/a 0 0 00:07:11.866 tests 24 24 24 0 0 00:07:11.866 asserts 150253 150253 150253 0 n/a 00:07:11.866 00:07:11.866 Elapsed time = 0.017 seconds 00:07:11.866 05:04:30 -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:07:11.866 00:07:11.866 00:07:11.866 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.866 http://cunit.sourceforge.net/ 00:07:11.866 00:07:11.866 00:07:11.866 Suite: init_grp_suite 00:07:11.866 Test: create_initiator_group_success_case ...passed 00:07:11.866 Test: find_initiator_group_success_case ...passed 00:07:11.866 Test: register_initiator_group_twice_case ...passed 00:07:11.866 Test: add_initiator_name_success_case ...passed 00:07:11.866 Test: add_initiator_name_fail_case ...[2024-07-26 05:04:30.884471] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:07:11.866 passed 00:07:11.866 Test: delete_all_initiator_names_success_case ...passed 00:07:11.866 Test: add_netmask_success_case ...passed 00:07:11.866 Test: add_netmask_fail_case ...[2024-07-26 05:04:30.884969] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:07:11.866 passed 00:07:11.866 Test: delete_all_netmasks_success_case ...passed 00:07:11.866 Test: initiator_name_overwrite_all_to_any_case ...passed 00:07:11.866 Test: netmask_overwrite_all_to_any_case ...passed 00:07:11.866 Test: add_delete_initiator_names_case ...passed 00:07:11.866 Test: add_duplicated_initiator_names_case ...passed 00:07:11.866 Test: delete_nonexisting_initiator_names_case ...passed 00:07:11.866 Test: add_delete_netmasks_case ...passed 00:07:11.866 Test: add_duplicated_netmasks_case ...passed 00:07:11.866 Test: delete_nonexisting_netmasks_case ...passed 00:07:11.866 00:07:11.866 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.866 suites 1 1 n/a 0 0 00:07:11.866 tests 17 17 17 0 0 00:07:11.866 asserts 108 108 108 0 n/a 00:07:11.866 00:07:11.866 Elapsed time = 0.002 seconds 00:07:11.866 05:04:30 -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:07:11.866 00:07:11.866 00:07:11.866 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.866 http://cunit.sourceforge.net/ 00:07:11.866 00:07:11.866 00:07:11.866 Suite: portal_grp_suite 00:07:11.866 Test: portal_create_ipv4_normal_case ...passed 00:07:11.866 Test: portal_create_ipv6_normal_case ...passed 00:07:11.866 Test: portal_create_ipv4_wildcard_case ...passed 00:07:11.866 Test: portal_create_ipv6_wildcard_case ...passed 00:07:11.866 Test: portal_create_twice_case ...passed 00:07:11.866 Test: portal_grp_register_unregister_case ...passed 00:07:11.866 Test: portal_grp_register_twice_case ...passed 00:07:11.866 Test: portal_grp_add_delete_case ...[2024-07-26 05:04:30.919814] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:07:11.866 passed 00:07:11.866 Test: portal_grp_add_delete_twice_case ...passed 00:07:11.866 00:07:11.866 Run Summary: Type Total Ran 
Passed Failed Inactive 00:07:11.866 suites 1 1 n/a 0 0 00:07:11.866 tests 9 9 9 0 0 00:07:11.866 asserts 44 44 44 0 n/a 00:07:11.866 00:07:11.866 Elapsed time = 0.004 seconds 00:07:11.866 00:07:11.866 real 0m0.251s 00:07:11.866 user 0m0.121s 00:07:11.866 sys 0m0.132s 00:07:11.866 05:04:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.866 05:04:30 -- common/autotest_common.sh@10 -- # set +x 00:07:11.866 ************************************ 00:07:11.866 END TEST unittest_iscsi 00:07:11.866 ************************************ 00:07:12.126 05:04:30 -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 00:07:12.126 05:04:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:12.126 05:04:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:12.126 05:04:30 -- common/autotest_common.sh@10 -- # set +x 00:07:12.126 ************************************ 00:07:12.126 START TEST unittest_json 00:07:12.126 ************************************ 00:07:12.126 05:04:30 -- common/autotest_common.sh@1104 -- # unittest_json 00:07:12.126 05:04:30 -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:07:12.126 00:07:12.126 00:07:12.126 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.126 http://cunit.sourceforge.net/ 00:07:12.126 00:07:12.126 00:07:12.126 Suite: json 00:07:12.126 Test: test_parse_literal ...passed 00:07:12.126 Test: test_parse_string_simple ...passed 00:07:12.126 Test: test_parse_string_control_chars ...passed 00:07:12.126 Test: test_parse_string_utf8 ...passed 00:07:12.126 Test: test_parse_string_escapes_twochar ...passed 00:07:12.126 Test: test_parse_string_escapes_unicode ...passed 00:07:12.126 Test: test_parse_number ...passed 00:07:12.126 Test: test_parse_array ...passed 00:07:12.126 Test: test_parse_object ...passed 00:07:12.126 Test: test_parse_nesting ...passed 00:07:12.126 Test: test_parse_comment ...passed 00:07:12.126 00:07:12.126 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.126 suites 1 1 n/a 0 0 00:07:12.126 tests 11 11 11 0 0 00:07:12.126 asserts 1516 1516 1516 0 n/a 00:07:12.126 00:07:12.126 Elapsed time = 0.002 seconds 00:07:12.126 05:04:31 -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:07:12.126 00:07:12.126 00:07:12.126 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.126 http://cunit.sourceforge.net/ 00:07:12.126 00:07:12.126 00:07:12.126 Suite: json 00:07:12.126 Test: test_strequal ...passed 00:07:12.126 Test: test_num_to_uint16 ...passed 00:07:12.126 Test: test_num_to_int32 ...passed 00:07:12.126 Test: test_num_to_uint64 ...passed 00:07:12.126 Test: test_decode_object ...passed 00:07:12.126 Test: test_decode_array ...passed 00:07:12.127 Test: test_decode_bool ...passed 00:07:12.127 Test: test_decode_uint16 ...passed 00:07:12.127 Test: test_decode_int32 ...passed 00:07:12.127 Test: test_decode_uint32 ...passed 00:07:12.127 Test: test_decode_uint64 ...passed 00:07:12.127 Test: test_decode_string ...passed 00:07:12.127 Test: test_decode_uuid ...passed 00:07:12.127 Test: test_find ...passed 00:07:12.127 Test: test_find_array ...passed 00:07:12.127 Test: test_iterating ...passed 00:07:12.127 Test: test_free_object ...passed 00:07:12.127 00:07:12.127 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.127 suites 1 1 n/a 0 0 00:07:12.127 tests 17 17 17 0 0 00:07:12.127 asserts 236 236 236 0 n/a 00:07:12.127 00:07:12.127 Elapsed time = 0.001 seconds 00:07:12.127 05:04:31 -- 
unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:07:12.127 00:07:12.127 00:07:12.127 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.127 http://cunit.sourceforge.net/ 00:07:12.127 00:07:12.127 00:07:12.127 Suite: json 00:07:12.127 Test: test_write_literal ...passed 00:07:12.127 Test: test_write_string_simple ...passed 00:07:12.127 Test: test_write_string_escapes ...passed 00:07:12.127 Test: test_write_string_utf16le ...passed 00:07:12.127 Test: test_write_number_int32 ...passed 00:07:12.127 Test: test_write_number_uint32 ...passed 00:07:12.127 Test: test_write_number_uint128 ...passed 00:07:12.127 Test: test_write_string_number_uint128 ...passed 00:07:12.127 Test: test_write_number_int64 ...passed 00:07:12.127 Test: test_write_number_uint64 ...passed 00:07:12.127 Test: test_write_number_double ...passed 00:07:12.127 Test: test_write_uuid ...passed 00:07:12.127 Test: test_write_array ...passed 00:07:12.127 Test: test_write_object ...passed 00:07:12.127 Test: test_write_nesting ...passed 00:07:12.127 Test: test_write_val ...passed 00:07:12.127 00:07:12.127 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.127 suites 1 1 n/a 0 0 00:07:12.127 tests 16 16 16 0 0 00:07:12.127 asserts 918 918 918 0 n/a 00:07:12.127 00:07:12.127 Elapsed time = 0.004 seconds 00:07:12.127 05:04:31 -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:07:12.127 00:07:12.127 00:07:12.127 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.127 http://cunit.sourceforge.net/ 00:07:12.127 00:07:12.127 00:07:12.127 Suite: jsonrpc 00:07:12.127 Test: test_parse_request ...passed 00:07:12.127 Test: test_parse_request_streaming ...passed 00:07:12.127 00:07:12.127 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.127 suites 1 1 n/a 0 0 00:07:12.127 tests 2 2 2 0 0 00:07:12.127 asserts 289 289 289 0 n/a 00:07:12.127 00:07:12.127 Elapsed time = 0.004 seconds 00:07:12.127 00:07:12.127 real 0m0.133s 00:07:12.127 user 0m0.073s 00:07:12.127 sys 0m0.062s 00:07:12.127 05:04:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.127 05:04:31 -- common/autotest_common.sh@10 -- # set +x 00:07:12.127 ************************************ 00:07:12.127 END TEST unittest_json 00:07:12.127 ************************************ 00:07:12.127 05:04:31 -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:07:12.127 05:04:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:12.127 05:04:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:12.127 05:04:31 -- common/autotest_common.sh@10 -- # set +x 00:07:12.127 ************************************ 00:07:12.127 START TEST unittest_rpc 00:07:12.127 ************************************ 00:07:12.127 05:04:31 -- common/autotest_common.sh@1104 -- # unittest_rpc 00:07:12.127 05:04:31 -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:07:12.127 00:07:12.127 00:07:12.127 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.127 http://cunit.sourceforge.net/ 00:07:12.127 00:07:12.127 00:07:12.127 Suite: rpc 00:07:12.127 Test: test_jsonrpc_handler ...passed 00:07:12.127 Test: test_spdk_rpc_is_method_allowed ...passed 00:07:12.127 Test: test_rpc_get_methods ...passed 00:07:12.127 Test: test_rpc_spdk_get_version ...passed 00:07:12.127 Test: test_spdk_rpc_listen_close ...passed 00:07:12.127 00:07:12.127 [2024-07-26 05:04:31.193943] 
/home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 378:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:07:12.127 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.127 suites 1 1 n/a 0 0 00:07:12.127 tests 5 5 5 0 0 00:07:12.127 asserts 20 20 20 0 n/a 00:07:12.127 00:07:12.127 Elapsed time = 0.000 seconds 00:07:12.127 00:07:12.127 real 0m0.031s 00:07:12.127 user 0m0.015s 00:07:12.127 sys 0m0.016s 00:07:12.127 05:04:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.127 05:04:31 -- common/autotest_common.sh@10 -- # set +x 00:07:12.127 ************************************ 00:07:12.127 END TEST unittest_rpc 00:07:12.127 ************************************ 00:07:12.386 05:04:31 -- unit/unittest.sh@245 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:07:12.386 05:04:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:12.386 05:04:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:12.386 05:04:31 -- common/autotest_common.sh@10 -- # set +x 00:07:12.386 ************************************ 00:07:12.386 START TEST unittest_notify 00:07:12.387 ************************************ 00:07:12.387 05:04:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:07:12.387 00:07:12.387 00:07:12.387 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.387 http://cunit.sourceforge.net/ 00:07:12.387 00:07:12.387 00:07:12.387 Suite: app_suite 00:07:12.387 Test: notify ...passed 00:07:12.387 00:07:12.387 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.387 suites 1 1 n/a 0 0 00:07:12.387 tests 1 1 1 0 0 00:07:12.387 asserts 13 13 13 0 n/a 00:07:12.387 00:07:12.387 Elapsed time = 0.000 seconds 00:07:12.387 00:07:12.387 real 0m0.032s 00:07:12.387 user 0m0.018s 00:07:12.387 sys 0m0.014s 00:07:12.387 05:04:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.387 05:04:31 -- common/autotest_common.sh@10 -- # set +x 00:07:12.387 ************************************ 00:07:12.387 END TEST unittest_notify 00:07:12.387 ************************************ 00:07:12.387 05:04:31 -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:07:12.387 05:04:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:12.387 05:04:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:12.387 05:04:31 -- common/autotest_common.sh@10 -- # set +x 00:07:12.387 ************************************ 00:07:12.387 START TEST unittest_nvme 00:07:12.387 ************************************ 00:07:12.387 05:04:31 -- common/autotest_common.sh@1104 -- # unittest_nvme 00:07:12.387 05:04:31 -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:07:12.387 00:07:12.387 00:07:12.387 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.387 http://cunit.sourceforge.net/ 00:07:12.387 00:07:12.387 00:07:12.387 Suite: nvme 00:07:12.387 Test: test_opc_data_transfer ...passed 00:07:12.387 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:07:12.387 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:07:12.387 Test: test_trid_parse_and_compare ...[2024-07-26 05:04:31.368790] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:07:12.387 [2024-07-26 05:04:31.369019] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:12.387 [2024-07-26 
05:04:31.369066] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1179:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:07:12.387 [2024-07-26 05:04:31.369089] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:12.387 [2024-07-26 05:04:31.369130] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value 00:07:12.387 [2024-07-26 05:04:31.369159] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:12.387 passed 00:07:12.387 Test: test_trid_trtype_str ...passed 00:07:12.387 Test: test_trid_adrfam_str ...passed 00:07:12.387 Test: test_nvme_ctrlr_probe ...[2024-07-26 05:04:31.369456] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:07:12.387 passed 00:07:12.387 Test: test_spdk_nvme_probe ...[2024-07-26 05:04:31.369552] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:12.387 [2024-07-26 05:04:31.369580] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:07:12.387 passed 00:07:12.387 Test: test_spdk_nvme_connect ...[2024-07-26 05:04:31.369671] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:07:12.387 [2024-07-26 05:04:31.369713] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:07:12.387 [2024-07-26 05:04:31.369786] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified 00:07:12.387 passed 00:07:12.387 Test: test_nvme_ctrlr_probe_internal ...[2024-07-26 05:04:31.370209] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:12.387 [2024-07-26 05:04:31.370258] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed 00:07:12.387 passed 00:07:12.387 Test: test_nvme_init_controllers ...[2024-07-26 05:04:31.370413] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:07:12.387 [2024-07-26 05:04:31.370448] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:07:12.387 [2024-07-26 05:04:31.370523] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:07:12.387 passed 00:07:12.387 Test: test_nvme_driver_init ...[2024-07-26 05:04:31.370618] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:07:12.387 [2024-07-26 05:04:31.370654] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:12.387 [2024-07-26 05:04:31.484965] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:07:12.387 [2024-07-26 05:04:31.485156] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:07:12.387 passed 00:07:12.387 Test: test_spdk_nvme_detach ...passed 00:07:12.387 Test: test_nvme_completion_poll_cb ...passed 00:07:12.387 Test: test_nvme_user_copy_cmd_complete ...passed 00:07:12.387 Test: 
test_nvme_allocate_request_null ...passed 00:07:12.387 Test: test_nvme_allocate_request ...passed 00:07:12.387 Test: test_nvme_free_request ...passed 00:07:12.387 Test: test_nvme_allocate_request_user_copy ...passed 00:07:12.387 Test: test_nvme_robust_mutex_init_shared ...passed 00:07:12.387 Test: test_nvme_request_check_timeout ...passed 00:07:12.387 Test: test_nvme_wait_for_completion ...passed 00:07:12.387 Test: test_spdk_nvme_parse_func ...passed 00:07:12.387 Test: test_spdk_nvme_detach_async ...passed 00:07:12.387 Test: test_nvme_parse_addr ...[2024-07-26 05:04:31.486251] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:07:12.387 passed 00:07:12.387 00:07:12.387 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.387 suites 1 1 n/a 0 0 00:07:12.387 tests 25 25 25 0 0 00:07:12.387 asserts 326 326 326 0 n/a 00:07:12.387 00:07:12.387 Elapsed time = 0.007 seconds 00:07:12.646 05:04:31 -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:07:12.646 00:07:12.646 00:07:12.646 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.646 http://cunit.sourceforge.net/ 00:07:12.646 00:07:12.646 00:07:12.646 Suite: nvme_ctrlr 00:07:12.646 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-07-26 05:04:31.522412] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:12.646 passed 00:07:12.646 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-07-26 05:04:31.524126] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:12.646 passed 00:07:12.646 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-07-26 05:04:31.525428] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:12.646 passed 00:07:12.646 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-07-26 05:04:31.526780] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:12.646 passed 00:07:12.647 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-07-26 05:04:31.528130] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:12.647 [2024-07-26 05:04:31.529306] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-26 05:04:31.530533] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-26 05:04:31.531756] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:12.647 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-07-26 05:04:31.534269] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:12.647 [2024-07-26 05:04:31.536648] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-26 05:04:31.537863] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:12.647 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-07-26 05:04:31.540375] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:12.647 [2024-07-26 05:04:31.541620] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-26 05:04:31.544058] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:12.647 Test: test_nvme_ctrlr_init_delay ...[2024-07-26 05:04:31.546667] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:12.647 passed 00:07:12.647 Test: test_alloc_io_qpair_rr_1 ...[2024-07-26 05:04:31.548095] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:12.647 [2024-07-26 05:04:31.548333] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:07:12.647 [2024-07-26 05:04:31.548441] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:12.647 passed 00:07:12.647 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:07:12.647 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:07:12.647 Test: test_alloc_io_qpair_wrr_1 ...[2024-07-26 05:04:31.548503] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:12.647 [2024-07-26 05:04:31.548537] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:12.647 [2024-07-26 05:04:31.548703] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:12.647 passed 00:07:12.647 Test: test_alloc_io_qpair_wrr_2 ...[2024-07-26 05:04:31.548881] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:12.647 [2024-07-26 05:04:31.549053] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:07:12.647 passed 00:07:12.647 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-07-26 05:04:31.549300] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4846:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:07:12.647 [2024-07-26 05:04:31.549411] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:07:12.647 passed 00:07:12.647 Test: test_nvme_ctrlr_fail ...[2024-07-26 05:04:31.549509] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4923:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 
00:07:12.647 [2024-07-26 05:04:31.549588] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:07:12.647 passed 00:07:12.647 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:07:12.647 Test: test_nvme_ctrlr_set_supported_features ...passed 00:07:12.647 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...[2024-07-26 05:04:31.549644] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:07:12.647 passed 00:07:12.647 Test: test_nvme_ctrlr_test_active_ns ...[2024-07-26 05:04:31.549943] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:12.907 passed 00:07:12.907 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:07:12.907 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:07:12.907 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:07:12.907 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-07-26 05:04:31.891168] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:12.907 passed 00:07:12.907 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-07-26 05:04:31.898499] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:12.907 passed 00:07:12.907 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-07-26 05:04:31.899806] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:12.907 [2024-07-26 05:04:31.899869] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2870:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:07:12.907 passed 00:07:12.907 Test: test_alloc_io_qpair_fail ...[2024-07-26 05:04:31.901090] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:12.907 passed 00:07:12.907 Test: test_nvme_ctrlr_add_remove_process ...passed 00:07:12.907 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:07:12.907 Test: test_nvme_ctrlr_set_state ...[2024-07-26 05:04:31.901196] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:07:12.907 [2024-07-26 05:04:31.901375] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1465:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:07:12.907 passed 00:07:12.907 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-07-26 05:04:31.901443] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:12.907 passed 00:07:12.907 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-07-26 05:04:31.923926] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:12.907 passed 00:07:12.907 Test: test_nvme_ctrlr_ns_mgmt ...[2024-07-26 05:04:31.959466] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:12.907 passed 00:07:12.907 Test: test_nvme_ctrlr_reset ...[2024-07-26 05:04:31.960983] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:12.907 passed 00:07:12.907 Test: test_nvme_ctrlr_aer_callback ...[2024-07-26 05:04:31.961389] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:12.907 passed 00:07:12.907 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-07-26 05:04:31.962836] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:12.907 passed 00:07:12.907 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:07:12.907 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:07:12.907 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-07-26 05:04:31.964486] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:12.907 passed 00:07:12.907 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:07:12.907 Test: test_nvme_ctrlr_ana_resize ...[2024-07-26 05:04:31.965861] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:12.907 passed 00:07:12.907 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:07:12.907 Test: test_nvme_transport_ctrlr_ready ...passed 00:07:12.907 Test: test_nvme_ctrlr_disable ...[2024-07-26 05:04:31.967383] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4016:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:07:12.907 [2024-07-26 05:04:31.967425] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4067:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:07:12.907 [2024-07-26 05:04:31.967461] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:12.907 passed 00:07:12.907 00:07:12.907 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.907 suites 1 1 n/a 0 0 00:07:12.907 tests 43 43 43 0 0 00:07:12.907 asserts 10418 10418 10418 0 n/a 00:07:12.907 00:07:12.907 Elapsed time = 0.405 seconds 00:07:12.907 05:04:31 -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:07:12.907 00:07:12.907 
00:07:12.907 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.907 http://cunit.sourceforge.net/ 00:07:12.907 00:07:12.907 00:07:12.907 Suite: nvme_ctrlr_cmd 00:07:12.907 Test: test_get_log_pages ...passed 00:07:12.907 Test: test_set_feature_cmd ...passed 00:07:12.907 Test: test_set_feature_ns_cmd ...passed 00:07:12.907 Test: test_get_feature_cmd ...passed 00:07:12.907 Test: test_get_feature_ns_cmd ...passed 00:07:12.907 Test: test_abort_cmd ...passed 00:07:12.907 Test: test_set_host_id_cmds ...[2024-07-26 05:04:32.015772] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:07:13.167 passed 00:07:13.167 Test: test_io_cmd_raw_no_payload_build ...passed 00:07:13.167 Test: test_io_raw_cmd ...passed 00:07:13.167 Test: test_io_raw_cmd_with_md ...passed 00:07:13.167 Test: test_namespace_attach ...passed 00:07:13.167 Test: test_namespace_detach ...passed 00:07:13.167 Test: test_namespace_create ...passed 00:07:13.167 Test: test_namespace_delete ...passed 00:07:13.167 Test: test_doorbell_buffer_config ...passed 00:07:13.167 Test: test_format_nvme ...passed 00:07:13.167 Test: test_fw_commit ...passed 00:07:13.167 Test: test_fw_image_download ...passed 00:07:13.167 Test: test_sanitize ...passed 00:07:13.167 Test: test_directive ...passed 00:07:13.167 Test: test_nvme_request_add_abort ...passed 00:07:13.167 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:07:13.167 Test: test_nvme_ctrlr_cmd_identify ...passed 00:07:13.167 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:07:13.167 00:07:13.167 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.167 suites 1 1 n/a 0 0 00:07:13.167 tests 24 24 24 0 0 00:07:13.167 asserts 198 198 198 0 n/a 00:07:13.167 00:07:13.167 Elapsed time = 0.001 seconds 00:07:13.167 05:04:32 -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:07:13.167 00:07:13.167 00:07:13.167 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.167 http://cunit.sourceforge.net/ 00:07:13.167 00:07:13.167 00:07:13.167 Suite: nvme_ctrlr_cmd 00:07:13.167 Test: test_geometry_cmd ...passed 00:07:13.167 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:07:13.167 00:07:13.167 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.167 suites 1 1 n/a 0 0 00:07:13.167 tests 2 2 2 0 0 00:07:13.167 asserts 7 7 7 0 n/a 00:07:13.167 00:07:13.167 Elapsed time = 0.000 seconds 00:07:13.167 05:04:32 -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:07:13.167 00:07:13.167 00:07:13.167 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.167 http://cunit.sourceforge.net/ 00:07:13.167 00:07:13.167 00:07:13.167 Suite: nvme 00:07:13.167 Test: test_nvme_ns_construct ...passed 00:07:13.167 Test: test_nvme_ns_uuid ...passed 00:07:13.167 Test: test_nvme_ns_csi ...passed 00:07:13.167 Test: test_nvme_ns_data ...passed 00:07:13.167 Test: test_nvme_ns_set_identify_data ...passed 00:07:13.167 Test: test_spdk_nvme_ns_get_values ...passed 00:07:13.167 Test: test_spdk_nvme_ns_is_active ...passed 00:07:13.167 Test: spdk_nvme_ns_supports ...passed 00:07:13.167 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:07:13.167 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:07:13.167 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:07:13.167 Test: test_nvme_ns_find_id_desc ...passed 00:07:13.167 00:07:13.167 Run Summary: Type Total Ran 
Passed Failed Inactive 00:07:13.167 suites 1 1 n/a 0 0 00:07:13.167 tests 12 12 12 0 0 00:07:13.167 asserts 83 83 83 0 n/a 00:07:13.167 00:07:13.167 Elapsed time = 0.001 seconds 00:07:13.167 05:04:32 -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:07:13.167 00:07:13.167 00:07:13.167 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.167 http://cunit.sourceforge.net/ 00:07:13.167 00:07:13.167 00:07:13.167 Suite: nvme_ns_cmd 00:07:13.167 Test: split_test ...passed 00:07:13.167 Test: split_test2 ...passed 00:07:13.167 Test: split_test3 ...passed 00:07:13.167 Test: split_test4 ...passed 00:07:13.167 Test: test_nvme_ns_cmd_flush ...passed 00:07:13.167 Test: test_nvme_ns_cmd_dataset_management ...passed 00:07:13.167 Test: test_nvme_ns_cmd_copy ...passed 00:07:13.167 Test: test_io_flags ...passed 00:07:13.167 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:07:13.167 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:07:13.167 Test: test_nvme_ns_cmd_reservation_register ...passed 00:07:13.167 Test: test_nvme_ns_cmd_reservation_release ...passed 00:07:13.167 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:07:13.167 Test: test_nvme_ns_cmd_reservation_report ...passed 00:07:13.167 Test: test_cmd_child_request ...[2024-07-26 05:04:32.116870] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:07:13.167 passed 00:07:13.167 Test: test_nvme_ns_cmd_readv ...passed 00:07:13.167 Test: test_nvme_ns_cmd_read_with_md ...passed 00:07:13.167 Test: test_nvme_ns_cmd_writev ...passed 00:07:13.167 Test: test_nvme_ns_cmd_write_with_md ...passed 00:07:13.167 Test: test_nvme_ns_cmd_zone_append_with_md ...[2024-07-26 05:04:32.118452] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 287:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:07:13.167 passed 00:07:13.167 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:07:13.167 Test: test_nvme_ns_cmd_comparev ...passed 00:07:13.167 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:07:13.167 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:07:13.167 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:07:13.167 Test: test_nvme_ns_cmd_setup_request ...passed 00:07:13.167 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:07:13.167 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed 00:07:13.167 Test: test_spdk_nvme_ns_cmd_readv_ext ...passed 00:07:13.167 Test: test_nvme_ns_cmd_verify ...passed 00:07:13.167 Test: test_nvme_ns_cmd_io_mgmt_send ...[2024-07-26 05:04:32.120374] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:07:13.167 [2024-07-26 05:04:32.120507] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:07:13.167 passed 00:07:13.167 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:07:13.167 00:07:13.167 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.167 suites 1 1 n/a 0 0 00:07:13.167 tests 32 32 32 0 0 00:07:13.167 asserts 550 550 550 0 n/a 00:07:13.167 00:07:13.167 Elapsed time = 0.005 seconds 00:07:13.167 05:04:32 -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:07:13.167 00:07:13.167 00:07:13.167 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.167 http://cunit.sourceforge.net/ 00:07:13.167 00:07:13.167 00:07:13.167 Suite: 
nvme_ns_cmd 00:07:13.167 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:07:13.167 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:07:13.167 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:07:13.167 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:07:13.167 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:07:13.167 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:07:13.167 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:07:13.167 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:07:13.167 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:07:13.167 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:07:13.167 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:07:13.167 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:07:13.167 00:07:13.168 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.168 suites 1 1 n/a 0 0 00:07:13.168 tests 12 12 12 0 0 00:07:13.168 asserts 123 123 123 0 n/a 00:07:13.168 00:07:13.168 Elapsed time = 0.002 seconds 00:07:13.168 05:04:32 -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:07:13.168 00:07:13.168 00:07:13.168 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.168 http://cunit.sourceforge.net/ 00:07:13.168 00:07:13.168 00:07:13.168 Suite: nvme_qpair 00:07:13.168 Test: test3 ...passed 00:07:13.168 Test: test_ctrlr_failed ...passed 00:07:13.168 Test: struct_packing ...passed 00:07:13.168 Test: test_nvme_qpair_process_completions ...[2024-07-26 05:04:32.179074] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:13.168 [2024-07-26 05:04:32.179390] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:13.168 passed 00:07:13.168 Test: test_nvme_completion_is_retry ...passed 00:07:13.168 Test: test_get_status_string ...passed 00:07:13.168 Test: test_nvme_qpair_add_cmd_error_injection ...[2024-07-26 05:04:32.179475] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:07:13.168 [2024-07-26 05:04:32.179515] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:07:13.168 passed 00:07:13.168 Test: test_nvme_qpair_submit_request ...passed 00:07:13.168 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:07:13.168 Test: test_nvme_qpair_manual_complete_request ...passed 00:07:13.168 Test: test_nvme_qpair_init_deinit ...passed 00:07:13.168 Test: test_nvme_get_sgl_print_info ...passed 00:07:13.168 00:07:13.168 [2024-07-26 05:04:32.180034] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:13.168 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.168 suites 1 1 n/a 0 0 00:07:13.168 tests 12 12 12 0 0 00:07:13.168 asserts 154 154 154 0 n/a 00:07:13.168 00:07:13.168 Elapsed time = 0.001 seconds 00:07:13.168 05:04:32 -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:07:13.168 00:07:13.168 00:07:13.168 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.168 http://cunit.sourceforge.net/ 00:07:13.168 
00:07:13.168 00:07:13.168 Suite: nvme_pcie 00:07:13.168 Test: test_prp_list_append ...passed 00:07:13.168 Test: test_nvme_pcie_hotplug_monitor ...[2024-07-26 05:04:32.213446] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:07:13.168 [2024-07-26 05:04:32.213709] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:07:13.168 [2024-07-26 05:04:32.213759] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:07:13.168 [2024-07-26 05:04:32.213968] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:07:13.168 [2024-07-26 05:04:32.214097] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:07:13.168 passed 00:07:13.168 Test: test_shadow_doorbell_update ...passed 00:07:13.168 Test: test_build_contig_hw_sgl_request ...passed 00:07:13.168 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:07:13.168 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:07:13.168 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:07:13.168 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:07:13.168 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:07:13.168 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...[2024-07-26 05:04:32.214445] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:07:13.168 passed 00:07:13.168 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:07:13.168 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:07:13.168 Test: test_nvme_pcie_ctrlr_config_pmr ...passed 00:07:13.168 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:07:13.168 00:07:13.168 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.168 suites 1 1 n/a 0 0 00:07:13.168 tests 14 14 14 0 0 00:07:13.168 asserts 235 235 235 0 n/a 00:07:13.168 00:07:13.168 Elapsed time = 0.002 seconds 00:07:13.168 [2024-07-26 05:04:32.214632] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
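Editor's note on the nvme_pcie output above: the trailing errors come from PRP list construction, where the first data pointer only needs dword alignment, every later PRP entry must be page aligned, and the list can run out of entries. Purely as an illustration (this is not SPDK's nvme_pcie_prp_list_append; the helper name, the 4 KiB page size and the entry limit below are assumptions), a standalone C sketch of those three checks:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u   /* assumed memory page size of 4 KiB */

/* Apply the rules the nvme_pcie errors above describe: the first buffer
 * address must be dword aligned, every subsequent PRP entry must be page
 * aligned, and at most max_entries entries fit in the list. */
static bool
prp_list_ok(const uint64_t *addrs, size_t count, size_t max_entries)
{
	if (count == 0 || count > max_entries) {
		fprintf(stderr, "invalid PRP entry count %zu (max %zu)\n", count, max_entries);
		return false;
	}
	if (addrs[0] & 0x3) {
		fprintf(stderr, "virt_addr 0x%llx not dword aligned\n",
			(unsigned long long)addrs[0]);
		return false;
	}
	for (size_t i = 1; i < count; i++) {
		if (addrs[i] & (PAGE_SIZE - 1)) {
			fprintf(stderr, "PRP %zu not page aligned (0x%llx)\n",
				i + 1, (unsigned long long)addrs[i]);
			return false;
		}
	}
	return true;
}

int
main(void)
{
	uint64_t bad[]  = { 0x100000, 0x900800 };  /* second entry not page aligned */
	uint64_t good[] = { 0x100004, 0x901000 };

	printf("bad:  %s\n", prp_list_ok(bad, 2, 32) ? "ok" : "rejected");
	printf("good: %s\n", prp_list_ok(good, 2, 32) ? "ok" : "rejected");
	return 0;
}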
00:07:13.168 [2024-07-26 05:04:32.214724] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:07:13.168 [2024-07-26 05:04:32.214796] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:07:13.168 [2024-07-26 05:04:32.214851] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:07:13.168 05:04:32 -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:07:13.168 00:07:13.168 00:07:13.168 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.168 http://cunit.sourceforge.net/ 00:07:13.168 00:07:13.168 00:07:13.168 Suite: nvme_ns_cmd 00:07:13.168 Test: nvme_poll_group_create_test ...passed 00:07:13.168 Test: nvme_poll_group_add_remove_test ...passed 00:07:13.168 Test: nvme_poll_group_process_completions ...passed 00:07:13.168 Test: nvme_poll_group_destroy_test ...passed 00:07:13.168 Test: nvme_poll_group_get_free_stats ...passed 00:07:13.168 00:07:13.168 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.168 suites 1 1 n/a 0 0 00:07:13.168 tests 5 5 5 0 0 00:07:13.168 asserts 75 75 75 0 n/a 00:07:13.168 00:07:13.168 Elapsed time = 0.000 seconds 00:07:13.168 05:04:32 -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:07:13.428 00:07:13.428 00:07:13.428 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.428 http://cunit.sourceforge.net/ 00:07:13.428 00:07:13.428 00:07:13.428 Suite: nvme_quirks 00:07:13.428 Test: test_nvme_quirks_striping ...passed 00:07:13.428 00:07:13.428 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.428 suites 1 1 n/a 0 0 00:07:13.428 tests 1 1 1 0 0 00:07:13.428 asserts 5 5 5 0 n/a 00:07:13.428 00:07:13.428 Elapsed time = 0.000 seconds 00:07:13.428 05:04:32 -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:07:13.428 00:07:13.428 00:07:13.428 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.428 http://cunit.sourceforge.net/ 00:07:13.428 00:07:13.428 00:07:13.428 Suite: nvme_tcp 00:07:13.428 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:07:13.428 Test: test_nvme_tcp_build_iovs ...passed 00:07:13.428 Test: test_nvme_tcp_build_sgl_request ...passed 00:07:13.428 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:07:13.428 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:07:13.428 Test: test_nvme_tcp_req_complete_safe ...passed 00:07:13.428 Test: test_nvme_tcp_req_get ...passed 00:07:13.428 Test: test_nvme_tcp_req_init ...passed 00:07:13.428 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:07:13.428 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:07:13.428 Test: test_nvme_tcp_qpair_set_recv_state ...[2024-07-26 05:04:32.305552] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x721c34a0d2e0, and the iovcnt=16, remaining_size=28672 00:07:13.428 [2024-07-26 05:04:32.306140] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x721c34509030 is same with the state(6) to be set 00:07:13.428 passed 00:07:13.428 Test: test_nvme_tcp_alloc_reqs ...passed 00:07:13.428 Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed 00:07:13.428 Test: test_nvme_tcp_pdu_ch_handle ...[2024-07-26 
05:04:32.306537] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x721c34909070 is same with the state(5) to be set 00:07:13.428 [2024-07-26 05:04:32.306612] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1108:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x721c3480a6e0 00:07:13.428 [2024-07-26 05:04:32.306650] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:07:13.428 [2024-07-26 05:04:32.306685] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x721c3480a070 is same with the state(5) to be set 00:07:13.428 [2024-07-26 05:04:32.306718] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1118:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:07:13.428 [2024-07-26 05:04:32.306753] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x721c3480a070 is same with the state(5) to be set 00:07:13.428 [2024-07-26 05:04:32.306789] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:07:13.428 [2024-07-26 05:04:32.306827] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x721c3480a070 is same with the state(5) to be set 00:07:13.428 [2024-07-26 05:04:32.306861] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x721c3480a070 is same with the state(5) to be set 00:07:13.428 [2024-07-26 05:04:32.306900] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x721c3480a070 is same with the state(5) to be set 00:07:13.428 [2024-07-26 05:04:32.306935] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x721c3480a070 is same with the state(5) to be set 00:07:13.428 [2024-07-26 05:04:32.306974] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x721c3480a070 is same with the state(5) to be set 00:07:13.428 passed 00:07:13.428 Test: test_nvme_tcp_qpair_connect_sock ...[2024-07-26 05:04:32.307043] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x721c3480a070 is same with the state(5) to be set 00:07:13.428 passed 00:07:13.428 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:07:13.428 Test: test_nvme_tcp_c2h_payload_handle ...[2024-07-26 05:04:32.307282] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:07:13.428 [2024-07-26 05:04:32.307331] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:07:13.428 [2024-07-26 05:04:32.307666] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:07:13.428 passed 00:07:13.428 Test: test_nvme_tcp_icresp_handle ...[2024-07-26 05:04:32.307783] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x721c3480b540): PDU Sequence Error 00:07:13.428 [2024-07-26 05:04:32.307875] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1508:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:07:13.428 [2024-07-26 05:04:32.307916] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1515:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:07:13.428 [2024-07-26 05:04:32.307956] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x721c3490d070 is same with the state(5) to be set 00:07:13.428 [2024-07-26 05:04:32.307988] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1524:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:07:13.428 passed 00:07:13.428 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:07:13.428 Test: test_nvme_tcp_capsule_resp_hdr_handle ...passed 00:07:13.428 Test: test_nvme_tcp_ctrlr_connect_qpair ...[2024-07-26 05:04:32.308042] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x721c3490d070 is same with the state(5) to be set 00:07:13.428 [2024-07-26 05:04:32.308076] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x721c3490d070 is same with the state(0) to be set 00:07:13.428 [2024-07-26 05:04:32.308136] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x721c3480c540): PDU Sequence Error 00:07:13.428 [2024-07-26 05:04:32.308226] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1585:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x721c3490f200 00:07:13.428 passed 00:07:13.428 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...passed 00:07:13.428 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-07-26 05:04:32.308446] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x721c34a25480, errno=0, rc=0 00:07:13.428 [2024-07-26 05:04:32.308489] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x721c34a25480 is same with the state(5) to be set 00:07:13.428 [2024-07-26 05:04:32.308527] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x721c34a25480 is same with the state(5) to be set 00:07:13.428 [2024-07-26 05:04:32.308590] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x721c34a25480 (0): Success 00:07:13.429 [2024-07-26 05:04:32.308634] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x721c34a25480 (0): Success 00:07:13.429 passed 00:07:13.429 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...[2024-07-26 05:04:32.419063] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:07:13.429 [2024-07-26 05:04:32.419163] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
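Editor's note on the nvme_tcp output above: the icresp errors show the checks applied to the IC Resp PDU during connection setup, where the host rejects a response whose PFV is not 0, whose MAXH2CDATA is below 4096, or whose CPDA exceeds 31. A minimal standalone sketch of that validation, using an assumed simplified struct rather than SPDK's real PDU definitions:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Assumed, simplified view of the IC Resp fields these checks need;
 * the real PDU definition carries more fields. */
struct ic_resp {
	uint16_t pfv;        /* PDU format version, must be 0 */
	uint32_t maxh2cdata; /* must be at least 4096 */
	uint8_t  cpda;       /* controller PDU data alignment, must be <= 31 */
};

static bool
icresp_valid(const struct ic_resp *r)
{
	if (r->pfv != 0) {
		fprintf(stderr, "Expected ICResp PFV 0, got %u\n", (unsigned)r->pfv);
		return false;
	}
	if (r->maxh2cdata < 4096) {
		fprintf(stderr, "Expected ICResp maxh2cdata >=4096, got %u\n",
			(unsigned)r->maxh2cdata);
		return false;
	}
	if (r->cpda > 31) {
		fprintf(stderr, "Expected ICResp cpda <=31, got %u\n", (unsigned)r->cpda);
		return false;
	}
	return true;
}

int
main(void)
{
	struct ic_resp bad  = { .pfv = 1, .maxh2cdata = 2048, .cpda = 64 };
	struct ic_resp good = { .pfv = 0, .maxh2cdata = 131072, .cpda = 0 };

	printf("bad:  %s\n", icresp_valid(&bad) ? "accepted" : "rejected");
	printf("good: %s\n", icresp_valid(&good) ? "accepted" : "rejected");
	return 0;
}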
00:07:13.429 passed 00:07:13.429 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:07:13.429 Test: test_nvme_tcp_ctrlr_construct ...[2024-07-26 05:04:32.419474] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:13.429 [2024-07-26 05:04:32.419528] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:13.429 [2024-07-26 05:04:32.419765] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:07:13.429 [2024-07-26 05:04:32.419801] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:13.429 [2024-07-26 05:04:32.419896] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:07:13.429 [2024-07-26 05:04:32.419951] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:13.429 [2024-07-26 05:04:32.420127] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x513000001540 with addr=192.168.1.78, port=23 00:07:13.429 [2024-07-26 05:04:32.420208] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:13.429 passed 00:07:13.429 Test: test_nvme_tcp_qpair_submit_request ...passed 00:07:13.429 00:07:13.429 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.429 suites 1 1 n/a 0 0 00:07:13.429 tests 27 27 27 0 0 00:07:13.429 asserts 624 624 624 0 n/a 00:07:13.429 00:07:13.429 Elapsed time = 0.115 seconds 00:07:13.429 [2024-07-26 05:04:32.420386] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x513000001a80, and the iovcnt=1, remaining_size=1024 00:07:13.429 [2024-07-26 05:04:32.420438] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 961:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:07:13.429 05:04:32 -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:07:13.429 00:07:13.429 00:07:13.429 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.429 http://cunit.sourceforge.net/ 00:07:13.429 00:07:13.429 00:07:13.429 Suite: nvme_transport 00:07:13.429 Test: test_nvme_get_transport ...passed 00:07:13.429 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:07:13.429 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:07:13.429 Test: test_nvme_transport_poll_group_add_remove ...passed 00:07:13.429 Test: test_ctrlr_get_memory_domains ...passed 00:07:13.429 00:07:13.429 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.429 suites 1 1 n/a 0 0 00:07:13.429 tests 5 5 5 0 0 00:07:13.429 asserts 28 28 28 0 n/a 00:07:13.429 00:07:13.429 Elapsed time = 0.000 seconds 00:07:13.429 05:04:32 -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:07:13.429 00:07:13.429 00:07:13.429 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.429 http://cunit.sourceforge.net/ 00:07:13.429 00:07:13.429 00:07:13.429 Suite: nvme_io_msg 00:07:13.429 Test: test_nvme_io_msg_send ...passed 00:07:13.429 Test: test_nvme_io_msg_process ...passed 00:07:13.429 Test: 
test_nvme_io_msg_ctrlr_register_unregister ...passed 00:07:13.429 00:07:13.429 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.429 suites 1 1 n/a 0 0 00:07:13.429 tests 3 3 3 0 0 00:07:13.429 asserts 56 56 56 0 n/a 00:07:13.429 00:07:13.429 Elapsed time = 0.000 seconds 00:07:13.429 05:04:32 -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:07:13.429 00:07:13.429 00:07:13.429 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.429 http://cunit.sourceforge.net/ 00:07:13.429 00:07:13.429 00:07:13.429 Suite: nvme_pcie_common 00:07:13.429 Test: test_nvme_pcie_ctrlr_alloc_cmb ...passed 00:07:13.429 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:07:13.429 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...[2024-07-26 05:04:32.519838] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:07:13.429 passed 00:07:13.429 Test: test_nvme_pcie_ctrlr_connect_qpair ...passed 00:07:13.429 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-07-26 05:04:32.520571] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:07:13.429 [2024-07-26 05:04:32.520627] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:07:13.429 [2024-07-26 05:04:32.520666] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:07:13.429 passed 00:07:13.429 Test: test_nvme_pcie_poll_group_get_stats ...passed 00:07:13.429 00:07:13.429 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.429 suites 1 1 n/a 0 0 00:07:13.429 tests 6 6 6 0 0 00:07:13.429 asserts 148 148 148 0 n/a 00:07:13.429 00:07:13.429 Elapsed time = 0.001 seconds 00:07:13.429 [2024-07-26 05:04:32.521092] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:13.429 [2024-07-26 05:04:32.521141] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:13.429 05:04:32 -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:07:13.688 00:07:13.688 00:07:13.688 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.688 http://cunit.sourceforge.net/ 00:07:13.688 00:07:13.688 00:07:13.688 Suite: nvme_fabric 00:07:13.688 Test: test_nvme_fabric_prop_set_cmd ...passed 00:07:13.688 Test: test_nvme_fabric_prop_get_cmd ...passed 00:07:13.688 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:07:13.688 Test: test_nvme_fabric_discover_probe ...passed 00:07:13.688 Test: test_nvme_fabric_qpair_connect ...[2024-07-26 05:04:32.549517] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:07:13.688 passed 00:07:13.688 00:07:13.688 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.688 suites 1 1 n/a 0 0 00:07:13.688 tests 5 5 5 0 0 00:07:13.688 asserts 60 60 60 0 n/a 00:07:13.688 00:07:13.688 Elapsed time = 0.001 seconds 00:07:13.688 05:04:32 -- unit/unittest.sh@102 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:07:13.688 00:07:13.688 00:07:13.688 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.688 http://cunit.sourceforge.net/ 00:07:13.688 00:07:13.688 00:07:13.688 Suite: nvme_opal 00:07:13.688 Test: test_opal_nvme_security_recv_send_done ...passed 00:07:13.688 Test: test_opal_add_short_atom_header ...passed 00:07:13.688 00:07:13.688 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.688 suites 1 1 n/a 0 0 00:07:13.688 tests 2 2 2 0 0 00:07:13.688 asserts 22 22 22 0 n/a 00:07:13.688 00:07:13.688 Elapsed time = 0.000 seconds 00:07:13.688 [2024-07-26 05:04:32.572434] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:07:13.688 00:07:13.688 real 0m1.234s 00:07:13.688 user 0m0.638s 00:07:13.688 sys 0m0.446s 00:07:13.688 ************************************ 00:07:13.688 05:04:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.688 05:04:32 -- common/autotest_common.sh@10 -- # set +x 00:07:13.688 END TEST unittest_nvme 00:07:13.688 ************************************ 00:07:13.688 05:04:32 -- unit/unittest.sh@247 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:07:13.688 05:04:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:13.688 05:04:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:13.689 05:04:32 -- common/autotest_common.sh@10 -- # set +x 00:07:13.689 ************************************ 00:07:13.689 START TEST unittest_log 00:07:13.689 ************************************ 00:07:13.689 05:04:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:07:13.689 00:07:13.689 00:07:13.689 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.689 http://cunit.sourceforge.net/ 00:07:13.689 00:07:13.689 00:07:13.689 Suite: log 00:07:13.689 Test: log_test ...[2024-07-26 05:04:32.646525] log_ut.c: 54:log_test: *WARNING*: log warning unit test 00:07:13.689 [2024-07-26 05:04:32.646708] log_ut.c: 55:log_test: *DEBUG*: log test 00:07:13.689 log dump test: 00:07:13.689 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:07:13.689 spdk dump test: 00:07:13.689 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:07:13.689 spdk dump test: 00:07:13.689 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:07:13.689 00000010 65 20 63 68 61 72 73 e chars 00:07:13.689 passed 00:07:14.651 Test: deprecation ...passed 00:07:14.651 00:07:14.651 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.651 suites 1 1 n/a 0 0 00:07:14.651 tests 2 2 2 0 0 00:07:14.651 asserts 73 73 73 0 n/a 00:07:14.651 00:07:14.651 Elapsed time = 0.001 seconds 00:07:14.651 00:07:14.651 real 0m1.025s 00:07:14.651 user 0m0.013s 00:07:14.651 sys 0m0.013s 00:07:14.651 05:04:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.651 ************************************ 00:07:14.651 END TEST unittest_log 00:07:14.651 ************************************ 00:07:14.651 05:04:33 -- common/autotest_common.sh@10 -- # set +x 00:07:14.651 05:04:33 -- unit/unittest.sh@248 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:07:14.651 05:04:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:14.651 05:04:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.651 05:04:33 -- common/autotest_common.sh@10 -- # set +x 00:07:14.651 
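Editor's note: every *_ut binary in this log, including the log_ut run above, is a CUnit program, and the recurring "Run Summary: Type Total Ran Passed Failed Inactive" tables are printed by CUnit's basic runner. A minimal, self-contained harness in the same shape (a generic example, not the actual log_ut source):

#include <CUnit/Basic.h>

static void
test_example(void)
{
	CU_ASSERT_EQUAL(1 + 1, 2);   /* each CU_ASSERT* feeds the "asserts" column */
}

int
main(void)
{
	CU_pSuite suite;
	unsigned int failures;

	if (CU_initialize_registry() != CUE_SUCCESS) {
		return CU_get_error();
	}
	suite = CU_add_suite("example", NULL, NULL);
	if (suite == NULL || CU_add_test(suite, "test_example", test_example) == NULL) {
		CU_cleanup_registry();
		return CU_get_error();
	}
	CU_basic_set_mode(CU_BRM_VERBOSE);
	CU_basic_run_tests();               /* prints the per-suite results and Run Summary */
	failures = CU_get_number_of_failures();
	CU_cleanup_registry();
	return failures ? 1 : 0;
}

Built with the CUnit development package installed (for example, cc example_ut.c -lcunit), this prints the same kind of suite/test/assert summary that follows every suite in the log; the non-zero exit code on failure is what the surrounding run_test wrapper keys off when it closes the START/END TEST banners.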
************************************ 00:07:14.651 START TEST unittest_lvol 00:07:14.651 ************************************ 00:07:14.651 05:04:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:07:14.651 00:07:14.651 00:07:14.651 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.651 http://cunit.sourceforge.net/ 00:07:14.651 00:07:14.651 00:07:14.651 Suite: lvol 00:07:14.651 Test: lvs_init_unload_success ...[2024-07-26 05:04:33.733905] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:07:14.651 passed 00:07:14.651 Test: lvs_init_destroy_success ...passed 00:07:14.651 Test: lvs_init_opts_success ...passed 00:07:14.651 Test: lvs_unload_lvs_is_null_fail ...passed 00:07:14.651 Test: lvs_names ...[2024-07-26 05:04:33.734560] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:07:14.651 [2024-07-26 05:04:33.734749] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:07:14.651 [2024-07-26 05:04:33.734809] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:07:14.651 [2024-07-26 05:04:33.734853] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:07:14.651 [2024-07-26 05:04:33.734983] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:07:14.651 passed 00:07:14.651 Test: lvol_create_destroy_success ...passed 00:07:14.651 Test: lvol_create_fail ...[2024-07-26 05:04:33.735530] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:07:14.651 passed 00:07:14.651 Test: lvol_destroy_fail ...[2024-07-26 05:04:33.735653] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:07:14.651 passed 00:07:14.651 Test: lvol_close ...passed 00:07:14.651 Test: lvol_resize ...[2024-07-26 05:04:33.735917] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:07:14.651 [2024-07-26 05:04:33.736116] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:07:14.651 [2024-07-26 05:04:33.736157] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:07:14.651 passed 00:07:14.651 Test: lvol_set_read_only ...passed 00:07:14.651 Test: test_lvs_load ...passed[2024-07-26 05:04:33.736898] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:07:14.651 [2024-07-26 05:04:33.736956] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:07:14.651 00:07:14.651 Test: lvols_load ...[2024-07-26 05:04:33.737170] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:07:14.651 [2024-07-26 05:04:33.737307] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:07:14.651 passed 00:07:14.651 Test: lvol_open ...passed 00:07:14.651 Test: lvol_snapshot ...passed 00:07:14.651 Test: lvol_snapshot_fail ...passed 00:07:14.651 Test: lvol_clone ...[2024-07-26 05:04:33.737945] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already 
exists 00:07:14.651 passed 00:07:14.651 Test: lvol_clone_fail ...passed 00:07:14.651 Test: lvol_iter_clones ...[2024-07-26 05:04:33.738391] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:07:14.651 passed 00:07:14.651 Test: lvol_refcnt ...passed 00:07:14.651 Test: lvol_names ...[2024-07-26 05:04:33.738801] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol f9146c3c-cbfd-4080-8a9b-ba0bd895d93c because it is still open 00:07:14.651 [2024-07-26 05:04:33.738958] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:07:14.651 [2024-07-26 05:04:33.739061] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:14.651 [2024-07-26 05:04:33.739253] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:07:14.651 passed 00:07:14.651 Test: lvol_create_thin_provisioned ...passed 00:07:14.651 Test: lvol_rename ...[2024-07-26 05:04:33.739615] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:14.651 [2024-07-26 05:04:33.739708] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:07:14.651 passed 00:07:14.651 Test: lvs_rename ...[2024-07-26 05:04:33.739917] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:07:14.651 passed 00:07:14.651 Test: lvol_inflate ...[2024-07-26 05:04:33.740077] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:07:14.651 passed 00:07:14.651 Test: lvol_decouple_parent ...[2024-07-26 05:04:33.740284] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:07:14.651 passed 00:07:14.651 Test: lvol_get_xattr ...passed 00:07:14.651 Test: lvol_esnap_reload ...passed 00:07:14.651 Test: lvol_esnap_create_bad_args ...[2024-07-26 05:04:33.740739] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:07:14.651 [2024-07-26 05:04:33.740800] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
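Editor's note on the lvol output above: most of the rejected operations are name validation, where an empty name, a name without a null terminator inside its fixed field, or a duplicate lvol/lvol-store name is refused. A simplified standalone sketch of those checks (the 64-byte field size and the helper name are assumptions, not the SPDK implementation):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define NAME_MAX_LEN 64   /* assumed size of the fixed name field */

/* Simplified name checks mirroring the lvol errors above: non-empty,
 * null terminated inside the fixed field, and not already taken. */
static bool
name_valid(const char name[NAME_MAX_LEN], const char *const existing[], size_t n_existing)
{
	if (memchr(name, '\0', NAME_MAX_LEN) == NULL) {
		fprintf(stderr, "Name has no null terminator.\n");
		return false;
	}
	if (name[0] == '\0') {
		fprintf(stderr, "No name specified.\n");
		return false;
	}
	for (size_t i = 0; i < n_existing; i++) {
		if (strcmp(name, existing[i]) == 0) {
			fprintf(stderr, "name %s already exists\n", name);
			return false;
		}
	}
	return true;
}

int
main(void)
{
	const char *const taken[] = { "lvol", "snap", "clone" };
	char ok[NAME_MAX_LEN] = "lvol_new";
	char dup[NAME_MAX_LEN] = "snap";
	char unterminated[NAME_MAX_LEN];

	memset(unterminated, 'x', sizeof(unterminated));

	printf("%d\n", name_valid(ok, taken, 3));           /* 1 */
	printf("%d\n", name_valid(dup, taken, 3));          /* 0: already exists */
	printf("%d\n", name_valid(unterminated, taken, 3)); /* 0: no terminator */
	return 0;
}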
00:07:14.651 [2024-07-26 05:04:33.740857] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:07:14.651 passed 00:07:14.651 Test: lvol_esnap_create_delete ...[2024-07-26 05:04:33.740948] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:14.651 [2024-07-26 05:04:33.741087] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:07:14.651 passed 00:07:14.651 Test: lvol_esnap_load_esnaps ...passed 00:07:14.651 Test: lvol_esnap_missing ...[2024-07-26 05:04:33.741396] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:07:14.651 [2024-07-26 05:04:33.741594] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:07:14.651 [2024-07-26 05:04:33.741642] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:07:14.651 passed 00:07:14.651 Test: lvol_esnap_hotplug ... 00:07:14.651 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:07:14.651 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:07:14.651 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:07:14.651 [2024-07-26 05:04:33.742180] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 9ac867b5-561e-452f-9306-501656a4c158: failed to create esnap bs_dev: error -12 00:07:14.651 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:07:14.652 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:07:14.652 [2024-07-26 05:04:33.742400] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol a9e170c7-9f26-4aae-8589-19b8f8e1dc3d: failed to create esnap bs_dev: error -12 00:07:14.652 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:07:14.652 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:07:14.652 [2024-07-26 05:04:33.742519] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 1cd0ea34-cbd0-4c2f-8716-edcc3c8c35f5: failed to create esnap bs_dev: error -12 00:07:14.652 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:07:14.652 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:07:14.652 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:07:14.652 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:07:14.652 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:07:14.652 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:07:14.652 passed 00:07:14.652 Test: lvol_get_by ...passed 00:07:14.652 00:07:14.652 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.652 suites 1 1 n/a 0 0 00:07:14.652 tests 34 34 34 0 0 00:07:14.652 asserts 1439 1439 1439 0 n/a 00:07:14.652 00:07:14.652 Elapsed time = 0.010 seconds 00:07:14.911 00:07:14.911 real 0m0.050s 00:07:14.911 user 0m0.028s 00:07:14.911 sys 0m0.022s 00:07:14.911 05:04:33 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.911 ************************************ 00:07:14.911 END TEST unittest_lvol 00:07:14.911 ************************************ 00:07:14.911 05:04:33 -- common/autotest_common.sh@10 -- # set +x 00:07:14.911 05:04:33 -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:14.911 05:04:33 -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:07:14.911 05:04:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:14.911 05:04:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.911 05:04:33 -- common/autotest_common.sh@10 -- # set +x 00:07:14.911 ************************************ 00:07:14.911 START TEST unittest_nvme_rdma 00:07:14.911 ************************************ 00:07:14.911 05:04:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:07:14.911 00:07:14.911 00:07:14.911 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.911 http://cunit.sourceforge.net/ 00:07:14.911 00:07:14.911 00:07:14.911 Suite: nvme_rdma 00:07:14.911 Test: test_nvme_rdma_build_sgl_request ...passed 00:07:14.911 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:07:14.911 Test: test_nvme_rdma_build_contig_request ...[2024-07-26 05:04:33.831318] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:07:14.911 [2024-07-26 05:04:33.831537] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1628:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:07:14.911 [2024-07-26 05:04:33.831596] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1684:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:07:14.911 passed 00:07:14.911 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:07:14.911 Test: test_nvme_rdma_create_reqs ...[2024-07-26 05:04:33.831689] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1565:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:07:14.911 [2024-07-26 05:04:33.831790] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1007:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:07:14.911 passed 00:07:14.911 Test: test_nvme_rdma_create_rsps ...passed 00:07:14.911 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-07-26 05:04:33.832169] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 925:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:07:14.911 passed 00:07:14.911 Test: test_nvme_rdma_poller_create ...passed 00:07:14.911 Test: test_nvme_rdma_qpair_process_cm_event ...[2024-07-26 05:04:33.832409] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:07:14.911 [2024-07-26 05:04:33.832447] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:07:14.911 passed 00:07:14.911 Test: test_nvme_rdma_ctrlr_construct ...passed[2024-07-26 05:04:33.832613] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 526:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:07:14.911 00:07:14.911 Test: test_nvme_rdma_req_put_and_get ...passed 00:07:14.911 Test: test_nvme_rdma_req_init ...passed 00:07:14.911 Test: test_nvme_rdma_validate_cm_event ...passed 00:07:14.911 Test: test_nvme_rdma_qpair_init ...passed 00:07:14.911 Test: test_nvme_rdma_qpair_submit_request ...[2024-07-26 05:04:33.832934] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:07:14.911 [2024-07-26 05:04:33.832993] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:07:14.911 passed 00:07:14.911 Test: test_nvme_rdma_memory_domain ...passed 00:07:14.911 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:07:14.911 Test: test_rdma_get_memory_translation ...passed 00:07:14.911 Test: test_get_rdma_qpair_from_wc ...passed 00:07:14.911 Test: test_nvme_rdma_ctrlr_get_max_sges ...[2024-07-26 05:04:33.833213] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 352:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:07:14.911 [2024-07-26 05:04:33.833314] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1444:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:07:14.911 [2024-07-26 05:04:33.833348] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:07:14.911 passed 00:07:14.911 Test: test_nvme_rdma_poll_group_get_stats ...passed 00:07:14.911 Test: test_nvme_rdma_qpair_set_poller ...[2024-07-26 05:04:33.833485] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:14.911 [2024-07-26 05:04:33.833519] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:14.911 [2024-07-26 05:04:33.833684] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:07:14.911 [2024-07-26 05:04:33.833737] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:07:14.911 [2024-07-26 05:04:33.833756] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x73015c80a030 on poll group 0x50b000000040 00:07:14.911 [2024-07-26 05:04:33.833801] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:07:14.911 [2024-07-26 05:04:33.833834] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:07:14.911 [2024-07-26 05:04:33.833866] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x73015c80a030 on poll group 0x50b000000040 00:07:14.912 [2024-07-26 05:04:33.833926] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 701:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:07:14.912 passed 00:07:14.912 00:07:14.912 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.912 suites 1 1 n/a 0 0 00:07:14.912 tests 22 22 22 0 0 00:07:14.912 asserts 412 412 412 0 n/a 00:07:14.912 00:07:14.912 Elapsed time = 0.003 seconds 00:07:14.912 00:07:14.912 real 0m0.033s 00:07:14.912 user 0m0.022s 00:07:14.912 sys 0m0.011s 00:07:14.912 05:04:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.912 05:04:33 -- common/autotest_common.sh@10 -- # set +x 00:07:14.912 ************************************ 00:07:14.912 END TEST unittest_nvme_rdma 00:07:14.912 ************************************ 00:07:14.912 05:04:33 -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:07:14.912 05:04:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:14.912 05:04:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.912 05:04:33 -- common/autotest_common.sh@10 -- # set +x 00:07:14.912 ************************************ 00:07:14.912 START TEST unittest_nvmf_transport 00:07:14.912 ************************************ 00:07:14.912 05:04:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:07:14.912 00:07:14.912 00:07:14.912 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.912 http://cunit.sourceforge.net/ 00:07:14.912 00:07:14.912 00:07:14.912 Suite: nvmf 00:07:14.912 Test: test_spdk_nvmf_transport_create ...[2024-07-26 05:04:33.918055] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:07:14.912 [2024-07-26 05:04:33.918344] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:07:14.912 [2024-07-26 05:04:33.918420] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:07:14.912 passed 00:07:14.912 Test: test_nvmf_transport_poll_group_create ...[2024-07-26 05:04:33.918489] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 254:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:07:14.912 passed 00:07:14.912 Test: test_spdk_nvmf_transport_opts_init ...passed 00:07:14.912 Test: test_spdk_nvmf_transport_listen_ext ...[2024-07-26 05:04:33.918768] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
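Editor's note on the nvmf transport output above: transport creation validates its options, requiring io_unit_size to be non-zero and no larger than the large iobuf buffer, max_io_size to be a power of two of at least 8 KiB, and the opts block itself to be non-NULL with a non-zero opts_size. A standalone sketch of the size checks (the struct below and the 64 KiB buffer constant are assumptions; the real spdk_nvmf_transport_opts carries many more fields):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define LARGE_BUF_SIZE 65536u   /* assumed iobuf large-pool buffer size, matching the log */

/* Hypothetical option block holding just the two sizes checked here. */
struct transport_opts {
	uint32_t io_unit_size;
	uint32_t max_io_size;
};

static bool
is_pow2(uint32_t v)
{
	return v != 0 && (v & (v - 1)) == 0;
}

static bool
transport_opts_valid(const struct transport_opts *o)
{
	if (o->io_unit_size == 0) {
		fprintf(stderr, "io_unit_size cannot be 0\n");
		return false;
	}
	if (o->io_unit_size > LARGE_BUF_SIZE) {
		fprintf(stderr, "io_unit_size %u is larger than iobuf pool large buffer size %u\n",
			(unsigned)o->io_unit_size, (unsigned)LARGE_BUF_SIZE);
		return false;
	}
	if (!is_pow2(o->max_io_size) || o->max_io_size < 8192) {
		fprintf(stderr, "max_io_size %u must be a power of 2 and >= 8KB\n",
			(unsigned)o->max_io_size);
		return false;
	}
	return true;
}

int
main(void)
{
	struct transport_opts bad  = { .io_unit_size = 131072, .max_io_size = 4096 };
	struct transport_opts good = { .io_unit_size = 8192,   .max_io_size = 131072 };

	printf("bad:  %s\n", transport_opts_valid(&bad) ? "ok" : "rejected");
	printf("good: %s\n", transport_opts_valid(&good) ? "ok" : "rejected");
	return 0;
}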
00:07:14.912 [2024-07-26 05:04:33.918813] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:07:14.912 [2024-07-26 05:04:33.918847] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:07:14.912 passed 00:07:14.912 00:07:14.912 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.912 suites 1 1 n/a 0 0 00:07:14.912 tests 4 4 4 0 0 00:07:14.912 asserts 49 49 49 0 n/a 00:07:14.912 00:07:14.912 Elapsed time = 0.001 seconds 00:07:14.912 00:07:14.912 real 0m0.036s 00:07:14.912 user 0m0.023s 00:07:14.912 sys 0m0.013s 00:07:14.912 05:04:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.912 ************************************ 00:07:14.912 END TEST unittest_nvmf_transport 00:07:14.912 ************************************ 00:07:14.912 05:04:33 -- common/autotest_common.sh@10 -- # set +x 00:07:14.912 05:04:33 -- unit/unittest.sh@252 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:07:14.912 05:04:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:14.912 05:04:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.912 05:04:33 -- common/autotest_common.sh@10 -- # set +x 00:07:14.912 ************************************ 00:07:14.912 START TEST unittest_rdma 00:07:14.912 ************************************ 00:07:14.912 05:04:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:07:14.912 00:07:14.912 00:07:14.912 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.912 http://cunit.sourceforge.net/ 00:07:14.912 00:07:14.912 00:07:14.912 Suite: rdma_common 00:07:14.912 Test: test_spdk_rdma_pd ...[2024-07-26 05:04:34.003394] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:07:14.912 [2024-07-26 05:04:34.003826] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:07:14.912 passed 00:07:14.912 00:07:14.912 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.912 suites 1 1 n/a 0 0 00:07:14.912 tests 1 1 1 0 0 00:07:14.912 asserts 31 31 31 0 n/a 00:07:14.912 00:07:14.912 Elapsed time = 0.001 seconds 00:07:15.172 00:07:15.172 real 0m0.035s 00:07:15.172 user 0m0.021s 00:07:15.172 sys 0m0.015s 00:07:15.172 05:04:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.172 05:04:34 -- common/autotest_common.sh@10 -- # set +x 00:07:15.172 ************************************ 00:07:15.172 END TEST unittest_rdma 00:07:15.172 ************************************ 00:07:15.172 05:04:34 -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:15.172 05:04:34 -- unit/unittest.sh@256 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:07:15.172 05:04:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:15.172 05:04:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:15.172 05:04:34 -- common/autotest_common.sh@10 -- # set +x 00:07:15.172 ************************************ 00:07:15.172 START TEST unittest_nvme_cuse 00:07:15.172 ************************************ 00:07:15.172 05:04:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:07:15.172 00:07:15.172 00:07:15.172 
CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.172 http://cunit.sourceforge.net/ 00:07:15.172 00:07:15.172 00:07:15.172 Suite: nvme_cuse 00:07:15.172 Test: test_cuse_nvme_submit_io_read_write ...passed 00:07:15.172 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:07:15.172 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:07:15.172 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:07:15.172 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:07:15.172 Test: test_cuse_nvme_submit_io ...[2024-07-26 05:04:34.095338] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 656:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:07:15.172 passed 00:07:15.172 Test: test_cuse_nvme_reset ...passed 00:07:15.172 Test: test_nvme_cuse_stop ...passed 00:07:15.172 Test: test_spdk_nvme_cuse_get_ctrlr_name ...[2024-07-26 05:04:34.095596] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 341:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:07:15.172 passed 00:07:15.172 00:07:15.172 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.172 suites 1 1 n/a 0 0 00:07:15.172 tests 9 9 9 0 0 00:07:15.172 asserts 121 121 121 0 n/a 00:07:15.172 00:07:15.172 Elapsed time = 0.002 seconds 00:07:15.172 00:07:15.172 real 0m0.035s 00:07:15.172 user 0m0.017s 00:07:15.172 sys 0m0.018s 00:07:15.172 05:04:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.172 ************************************ 00:07:15.172 END TEST unittest_nvme_cuse 00:07:15.172 ************************************ 00:07:15.172 05:04:34 -- common/autotest_common.sh@10 -- # set +x 00:07:15.172 05:04:34 -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf 00:07:15.172 05:04:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:15.172 05:04:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:15.172 05:04:34 -- common/autotest_common.sh@10 -- # set +x 00:07:15.172 ************************************ 00:07:15.172 START TEST unittest_nvmf 00:07:15.172 ************************************ 00:07:15.172 05:04:34 -- common/autotest_common.sh@1104 -- # unittest_nvmf 00:07:15.172 05:04:34 -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:07:15.172 00:07:15.172 00:07:15.172 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.172 http://cunit.sourceforge.net/ 00:07:15.172 00:07:15.172 00:07:15.172 Suite: nvmf 00:07:15.172 Test: test_get_log_page ...passed 00:07:15.172 Test: test_process_fabrics_cmd ...passed 00:07:15.172 Test: test_connect ...[2024-07-26 05:04:34.180657] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:07:15.172 [2024-07-26 05:04:34.181523] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 905:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:07:15.172 [2024-07-26 05:04:34.181586] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 768:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:07:15.172 [2024-07-26 05:04:34.181635] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 944:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:07:15.172 [2024-07-26 05:04:34.181676] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:07:15.172 [2024-07-26 05:04:34.181714] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 
779:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:07:15.172 [2024-07-26 05:04:34.181754] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 786:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:07:15.172 [2024-07-26 05:04:34.181796] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 792:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:07:15.172 [2024-07-26 05:04:34.181824] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:07:15.172 [2024-07-26 05:04:34.181925] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:07:15.172 [2024-07-26 05:04:34.181992] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 587:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:07:15.172 [2024-07-26 05:04:34.182312] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 593:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:07:15.172 [2024-07-26 05:04:34.182382] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 599:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:07:15.172 [2024-07-26 05:04:34.182451] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 606:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:07:15.172 [2024-07-26 05:04:34.182515] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 623:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:07:15.172 [2024-07-26 05:04:34.182623] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 232:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:07:15.172 passed 00:07:15.172 Test: test_get_ns_id_desc_list ...[2024-07-26 05:04:34.182760] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 699:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil)) 00:07:15.172 passed 00:07:15.172 Test: test_identify_ns ...[2024-07-26 05:04:34.183060] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:15.172 [2024-07-26 05:04:34.183262] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:07:15.172 [2024-07-26 05:04:34.183394] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:07:15.172 passed 00:07:15.172 Test: test_identify_ns_iocs_specific ...[2024-07-26 05:04:34.183547] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:15.172 [2024-07-26 05:04:34.183835] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:15.172 passed 00:07:15.172 Test: test_reservation_write_exclusive ...passed 00:07:15.172 Test: test_reservation_exclusive_access ...passed 00:07:15.172 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:07:15.172 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:07:15.172 Test: test_reservation_notification_log_page ...passed 00:07:15.172 Test: test_get_dif_ctx ...passed 00:07:15.172 Test: test_set_get_features ...passed 00:07:15.172 Test: test_identify_ctrlr ...passed[2024-07-26 05:04:34.184403] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: 
*ERROR*: Invalid TMPSEL 9 00:07:15.173 [2024-07-26 05:04:34.184466] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:07:15.173 [2024-07-26 05:04:34.184495] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1545:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:07:15.173 [2024-07-26 05:04:34.184536] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1621:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:07:15.173 00:07:15.173 Test: test_identify_ctrlr_iocs_specific ...passed 00:07:15.173 Test: test_custom_admin_cmd ...passed 00:07:15.173 Test: test_fused_compare_and_write ...[2024-07-26 05:04:34.185062] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4105:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:07:15.173 passed 00:07:15.173 Test: test_multi_async_event_reqs ...passed 00:07:15.173 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:07:15.173 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:07:15.173 Test: test_multi_async_events ...[2024-07-26 05:04:34.185130] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4094:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:07:15.173 [2024-07-26 05:04:34.185169] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4112:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:07:15.173 passed 00:07:15.173 Test: test_rae ...passed 00:07:15.173 Test: test_nvmf_ctrlr_create_destruct ...passed 00:07:15.173 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:07:15.173 Test: test_spdk_nvmf_request_zcopy_start ...passed 00:07:15.173 Test: test_zcopy_read ...[2024-07-26 05:04:34.185788] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4232:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:07:15.173 passed 00:07:15.173 Test: test_zcopy_write ...passed 00:07:15.173 Test: test_nvmf_property_set ...passed 00:07:15.173 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...passed 00:07:15.173 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-07-26 05:04:34.185991] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:07:15.173 [2024-07-26 05:04:34.186077] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:07:15.173 passed 00:07:15.173 00:07:15.173 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.173 suites 1 1 n/a 0 0 00:07:15.173 tests 30 30 30 0 0 00:07:15.173 asserts 885 885 885 0 n/a 00:07:15.173 00:07:15.173 Elapsed time = 0.006 seconds 00:07:15.173 [2024-07-26 05:04:34.186127] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1855:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:07:15.173 [2024-07-26 05:04:34.186171] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1861:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:07:15.173 [2024-07-26 05:04:34.186204] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1873:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:07:15.173 05:04:34 -- unit/unittest.sh@107 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:07:15.173 00:07:15.173 00:07:15.173 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.173 
http://cunit.sourceforge.net/ 00:07:15.173 00:07:15.173 00:07:15.173 Suite: nvmf 00:07:15.173 Test: test_get_rw_params ...passed 00:07:15.173 Test: test_lba_in_range ...passed 00:07:15.173 Test: test_get_dif_ctx ...passed 00:07:15.173 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:07:15.173 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...passed 00:07:15.173 Test: test_nvmf_bdev_ctrlr_zcopy_start ...passed 00:07:15.173 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-07-26 05:04:34.216615] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:07:15.173 [2024-07-26 05:04:34.216816] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:07:15.173 [2024-07-26 05:04:34.216861] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:07:15.173 [2024-07-26 05:04:34.216925] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:07:15.173 [2024-07-26 05:04:34.216959] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:07:15.173 [2024-07-26 05:04:34.217033] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:07:15.173 passed 00:07:15.173 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:07:15.173 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:07:15.173 00:07:15.173 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.173 suites 1 1 n/a 0 0 00:07:15.173 tests 9 9 9 0 0 00:07:15.173 asserts 157 157 157 0 n/a 00:07:15.173 00:07:15.173 Elapsed time = 0.001 seconds 00:07:15.173 [2024-07-26 05:04:34.217081] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:07:15.173 [2024-07-26 05:04:34.217128] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:07:15.173 [2024-07-26 05:04:34.217172] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:07:15.173 05:04:34 -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:07:15.173 00:07:15.173 00:07:15.173 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.173 http://cunit.sourceforge.net/ 00:07:15.173 00:07:15.173 00:07:15.173 Suite: nvmf 00:07:15.173 Test: test_discovery_log ...passed 00:07:15.173 Test: test_discovery_log_with_filters ...passed 00:07:15.173 00:07:15.173 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.173 suites 1 1 n/a 0 0 00:07:15.173 tests 2 2 2 0 0 00:07:15.173 asserts 238 238 238 0 n/a 00:07:15.173 00:07:15.173 Elapsed time = 0.003 seconds 00:07:15.173 05:04:34 -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:07:15.433 00:07:15.433 00:07:15.433 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.433 http://cunit.sourceforge.net/ 00:07:15.433 00:07:15.433 00:07:15.433 Suite: nvmf 00:07:15.433 Test: nvmf_test_create_subsystem ...[2024-07-26 05:04:34.286617] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN 
"nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:07:15.433 [2024-07-26 05:04:34.286898] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:07:15.433 [2024-07-26 05:04:34.286946] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:07:15.433 [2024-07-26 05:04:34.286974] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:07:15.433 [2024-07-26 05:04:34.287029] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:07:15.433 [2024-07-26 05:04:34.287057] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:07:15.433 [2024-07-26 05:04:34.287169] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:07:15.433 passed 00:07:15.433 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-07-26 05:04:34.287276] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
00:07:15.433 [2024-07-26 05:04:34.287362] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:07:15.433 [2024-07-26 05:04:34.287388] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:07:15.433 [2024-07-26 05:04:34.287421] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:07:15.433 [2024-07-26 05:04:34.287630] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:07:15.433 passed 00:07:15.433 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:07:15.433 Test: test_reservation_register ...[2024-07-26 05:04:34.287674] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1774:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:07:15.433 [2024-07-26 05:04:34.287891] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:15.433 passed 00:07:15.433 Test: test_reservation_register_with_ptpl ...[2024-07-26 05:04:34.288019] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2881:nvmf_ns_reservation_register: *ERROR*: No registrant 00:07:15.433 passed 00:07:15.433 Test: test_reservation_acquire_preempt_1 ...passed 00:07:15.433 Test: test_reservation_acquire_release_with_ptpl ...[2024-07-26 05:04:34.289207] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:15.433 passed 00:07:15.433 Test: test_reservation_release ...passed 00:07:15.433 Test: test_reservation_unregister_notification ...passed 00:07:15.433 Test: test_reservation_release_notification ...[2024-07-26 05:04:34.291125] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:15.433 [2024-07-26 05:04:34.291331] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:15.433 [2024-07-26 05:04:34.291555] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:15.433 passed 00:07:15.433 Test: test_reservation_release_notification_write_exclusive ...[2024-07-26 05:04:34.291775] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:15.433 passed 00:07:15.433 Test: test_reservation_clear_notification ...passed 00:07:15.433 Test: test_reservation_preempt_notification ...[2024-07-26 05:04:34.291988] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:15.433 passed 00:07:15.434 Test: test_spdk_nvmf_ns_event ...[2024-07-26 05:04:34.292189] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:15.434 passed 00:07:15.434 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 
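Likewise, the spdk_nvmf_subsystem_add_ns_ext errors above ("Requested NSID 5 already in use", "Invalid NSID 4294967295") reflect the basic NSID rules: non-zero, not the broadcast value 0xFFFFFFFF, and not already assigned. A rough sketch of that check, assuming a simple in-use list rather than SPDK's namespace bookkeeping:

/* Illustrative NSID validation sketch; not the SPDK add_ns code path. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BROADCAST_NSID 0xFFFFFFFFu

static bool nsid_ok(uint32_t nsid, const uint32_t *in_use, size_t n_in_use)
{
    if (nsid == 0 || nsid == BROADCAST_NSID) {
        return false;                 /* "Invalid NSID 4294967295" */
    }
    for (size_t i = 0; i < n_in_use; i++) {
        if (in_use[i] == nsid) {
            return false;             /* "Requested NSID 5 already in use" */
        }
    }
    return true;
}

int main(void)
{
    uint32_t in_use[] = { 1, 2, 5 };

    printf("nsid 5          -> %s\n", nsid_ok(5, in_use, 3) ? "ok" : "rejected");
    printf("nsid 6          -> %s\n", nsid_ok(6, in_use, 3) ? "ok" : "rejected");
    printf("nsid 0xffffffff -> %s\n", nsid_ok(BROADCAST_NSID, in_use, 3) ? "ok" : "rejected");
    return 0;
}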
00:07:15.434 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:07:15.434 Test: test_spdk_nvmf_subsystem_add_host ...[2024-07-26 05:04:34.292988] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 260:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:07:15.434 [2024-07-26 05:04:34.293087] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:07:15.434 passed 00:07:15.434 Test: test_nvmf_ns_reservation_report ...passed 00:07:15.434 Test: test_nvmf_nqn_is_valid ...[2024-07-26 05:04:34.293225] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3186:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:07:15.434 [2024-07-26 05:04:34.293301] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:07:15.434 [2024-07-26 05:04:34.293342] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:21e4f61d-557d-413c-92ef-5288fa70f5c": uuid is not the correct length 00:07:15.434 passed 00:07:15.434 Test: test_nvmf_ns_reservation_restore ...passed 00:07:15.434 Test: test_nvmf_subsystem_state_change ...[2024-07-26 05:04:34.293375] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:07:15.434 [2024-07-26 05:04:34.293483] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2380:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:07:15.434 passed 00:07:15.434 Test: test_nvmf_reservation_custom_ops ...passed 00:07:15.434 00:07:15.434 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.434 suites 1 1 n/a 0 0 00:07:15.434 tests 22 22 22 0 0 00:07:15.434 asserts 407 407 407 0 n/a 00:07:15.434 00:07:15.434 Elapsed time = 0.008 seconds 00:07:15.434 05:04:34 -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:07:15.434 00:07:15.434 00:07:15.434 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.434 http://cunit.sourceforge.net/ 00:07:15.434 00:07:15.434 00:07:15.434 Suite: nvmf 00:07:15.434 Test: test_nvmf_tcp_create ...[2024-07-26 05:04:34.355050] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 730:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:07:15.434 passed 00:07:15.434 Test: test_nvmf_tcp_destroy ...passed 00:07:15.434 Test: test_nvmf_tcp_poll_group_create ...passed 00:07:15.434 Test: test_nvmf_tcp_send_c2h_data ...passed 00:07:15.434 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:07:15.434 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:07:15.434 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:07:15.434 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-07-26 05:04:34.476824] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:15.434 [2024-07-26 05:04:34.476908] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4e2890b020 is same with the state(5) to be set 00:07:15.434 passed 00:07:15.434 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:07:15.434 Test: test_nvmf_tcp_icreq_handle ...[2024-07-26 05:04:34.476944] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4e2890b020 is same with the state(5) to be set 00:07:15.434 [2024-07-26 05:04:34.476992] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:15.434 [2024-07-26 05:04:34.477172] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4e2890b020 is same with the state(5) to be set 00:07:15.434 [2024-07-26 05:04:34.477312] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:07:15.434 [2024-07-26 05:04:34.477396] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:15.434 [2024-07-26 05:04:34.477497] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4e2890d180 is same with the state(5) to be set 00:07:15.434 [2024-07-26 05:04:34.477552] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:07:15.434 [2024-07-26 05:04:34.477608] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4e2890d180 is same with the state(5) to be set 00:07:15.434 passed 00:07:15.434 Test: test_nvmf_tcp_check_xfer_type ...passed 00:07:15.434 Test: test_nvmf_tcp_invalid_sgl ...[2024-07-26 05:04:34.477678] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:15.434 [2024-07-26 05:04:34.477741] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4e2890d180 is same with the state(5) to be set 00:07:15.434 [2024-07-26 05:04:34.477803] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:07:15.434 [2024-07-26 05:04:34.477871] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4e2890d180 is same with the state(5) to be set 00:07:15.434 [2024-07-26 05:04:34.477997] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2484:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:07:15.434 passed 00:07:15.434 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-07-26 05:04:34.478062] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:15.434 [2024-07-26 05:04:34.478096] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4e289116a0 is same with the state(5) to be set 00:07:15.434 [2024-07-26 05:04:34.478143] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2216:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7e4e2880c8c0 00:07:15.434 [2024-07-26 05:04:34.478181] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:15.434 [2024-07-26 05:04:34.478215] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4e2880c020 is same with the state(5) to be set 00:07:15.434 [2024-07-26 05:04:34.478249] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2273:nvmf_tcp_pdu_ch_handle: 
*ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7e4e2880c020 00:07:15.434 [2024-07-26 05:04:34.478284] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:15.434 [2024-07-26 05:04:34.478312] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4e2880c020 is same with the state(5) to be set 00:07:15.434 [2024-07-26 05:04:34.478366] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2226:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:07:15.434 [2024-07-26 05:04:34.478419] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:15.434 [2024-07-26 05:04:34.478478] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4e2880c020 is same with the state(5) to be set 00:07:15.434 [2024-07-26 05:04:34.478536] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2265:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:07:15.434 [2024-07-26 05:04:34.478594] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:15.434 [2024-07-26 05:04:34.478637] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4e2880c020 is same with the state(5) to be set 00:07:15.434 [2024-07-26 05:04:34.478703] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:15.434 [2024-07-26 05:04:34.478742] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4e2880c020 is same with the state(5) to be set 00:07:15.434 [2024-07-26 05:04:34.478845] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:15.434 [2024-07-26 05:04:34.478898] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4e2880c020 is same with the state(5) to be set 00:07:15.434 [2024-07-26 05:04:34.478957] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:15.434 [2024-07-26 05:04:34.479034] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4e2880c020 is same with the state(5) to be set 00:07:15.434 [2024-07-26 05:04:34.479107] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:15.434 passed 00:07:15.434 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-07-26 05:04:34.479174] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4e2880c020 is same with the state(5) to be set 00:07:15.434 [2024-07-26 05:04:34.479228] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:15.434 [2024-07-26 05:04:34.479260] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4e2880c020 is same with the state(5) to be set 00:07:15.434 [2024-07-26 05:04:34.479296] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:15.434 [2024-07-26 05:04:34.479327] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4e2880c020 is same with the state(5) to be set 00:07:15.434 passed 00:07:15.434 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-07-26 05:04:34.513790] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:07:15.434 passed 00:07:15.434 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-07-26 05:04:34.513862] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:07:15.434 [2024-07-26 05:04:34.514864] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:07:15.434 passed 00:07:15.434 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-07-26 05:04:34.514929] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:07:15.434 passed 00:07:15.434 00:07:15.434 [2024-07-26 05:04:34.515676] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:07:15.434 [2024-07-26 05:04:34.515729] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:07:15.435 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.435 suites 1 1 n/a 0 0 00:07:15.435 tests 17 17 17 0 0 00:07:15.435 asserts 222 222 222 0 n/a 00:07:15.435 00:07:15.435 Elapsed time = 0.184 seconds 00:07:15.693 05:04:34 -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:07:15.693 00:07:15.693 00:07:15.693 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.693 http://cunit.sourceforge.net/ 00:07:15.693 00:07:15.693 00:07:15.693 Suite: nvmf 00:07:15.693 Test: test_nvmf_tgt_create_poll_group ...passed 00:07:15.693 00:07:15.693 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.693 suites 1 1 n/a 0 0 00:07:15.693 tests 1 1 1 0 0 00:07:15.693 asserts 17 17 17 0 n/a 00:07:15.693 00:07:15.693 Elapsed time = 0.027 seconds 00:07:15.693 00:07:15.693 real 0m0.523s 00:07:15.693 user 0m0.224s 00:07:15.693 sys 0m0.296s 00:07:15.693 05:04:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.693 05:04:34 -- common/autotest_common.sh@10 -- # set +x 00:07:15.693 ************************************ 00:07:15.693 END TEST unittest_nvmf 00:07:15.693 ************************************ 00:07:15.693 05:04:34 -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:15.693 05:04:34 -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:15.693 05:04:34 -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:07:15.693 05:04:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:15.693 05:04:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:15.693 05:04:34 -- common/autotest_common.sh@10 -- # set +x 00:07:15.693 ************************************ 00:07:15.693 START TEST unittest_nvmf_rdma 
00:07:15.693 ************************************ 00:07:15.694 05:04:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:07:15.694 00:07:15.694 00:07:15.694 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.694 http://cunit.sourceforge.net/ 00:07:15.694 00:07:15.694 00:07:15.694 Suite: nvmf 00:07:15.694 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-07-26 05:04:34.762723] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:07:15.694 [2024-07-26 05:04:34.762952] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1964:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:07:15.694 [2024-07-26 05:04:34.763022] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1964:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:07:15.694 passed 00:07:15.694 Test: test_spdk_nvmf_rdma_request_process ...passed 00:07:15.694 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:07:15.694 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:07:15.694 Test: test_nvmf_rdma_opts_init ...passed 00:07:15.694 Test: test_nvmf_rdma_request_free_data ...passed 00:07:15.694 Test: test_nvmf_rdma_update_ibv_state ...passed 00:07:15.694 Test: test_nvmf_rdma_resources_create ...[2024-07-26 05:04:34.764422] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 614:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 00:07:15.694 [2024-07-26 05:04:34.764492] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 625:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:07:15.694 passed 00:07:15.694 Test: test_nvmf_rdma_qpair_compare ...passed 00:07:15.694 Test: test_nvmf_rdma_resize_cq ...[2024-07-26 05:04:34.765888] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1006:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. 
Current capacity 20, required 0 00:07:15.694 Using CQ of insufficient size may lead to CQ overrun 00:07:15.694 [2024-07-26 05:04:34.765935] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1011:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:07:15.694 passed 00:07:15.694 00:07:15.694 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.694 suites 1 1 n/a 0 0 00:07:15.694 tests 10 10 10 0 0 00:07:15.694 asserts 584 584 584 0 n/a 00:07:15.694 00:07:15.694 Elapsed time = 0.004 seconds 00:07:15.694 [2024-07-26 05:04:34.766025] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1019:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:07:15.694 00:07:15.694 real 0m0.044s 00:07:15.694 user 0m0.027s 00:07:15.694 sys 0m0.017s 00:07:15.694 05:04:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.694 ************************************ 00:07:15.694 END TEST unittest_nvmf_rdma 00:07:15.694 ************************************ 00:07:15.694 05:04:34 -- common/autotest_common.sh@10 -- # set +x 00:07:15.953 05:04:34 -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:15.953 05:04:34 -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi 00:07:15.953 05:04:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:15.953 05:04:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:15.953 05:04:34 -- common/autotest_common.sh@10 -- # set +x 00:07:15.953 ************************************ 00:07:15.953 START TEST unittest_scsi 00:07:15.953 ************************************ 00:07:15.953 05:04:34 -- common/autotest_common.sh@1104 -- # unittest_scsi 00:07:15.953 05:04:34 -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:07:15.953 00:07:15.953 00:07:15.953 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.953 http://cunit.sourceforge.net/ 00:07:15.953 00:07:15.953 00:07:15.953 Suite: dev_suite 00:07:15.953 Test: dev_destruct_null_dev ...passed 00:07:15.953 Test: dev_destruct_zero_luns ...passed 00:07:15.953 Test: dev_destruct_null_lun ...passed 00:07:15.954 Test: dev_destruct_success ...passed 00:07:15.954 Test: dev_construct_num_luns_zero ...passed 00:07:15.954 Test: dev_construct_no_lun_zero ...passed 00:07:15.954 Test: dev_construct_null_lun ...passed 00:07:15.954 Test: dev_construct_name_too_long ...passed 00:07:15.954 Test: dev_construct_success ...[2024-07-26 05:04:34.861261] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:07:15.954 [2024-07-26 05:04:34.861512] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:07:15.954 [2024-07-26 05:04:34.861564] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:07:15.954 [2024-07-26 05:04:34.861613] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:07:15.954 passed 00:07:15.954 Test: dev_construct_success_lun_zero_not_first ...passed 00:07:15.954 Test: 
dev_queue_mgmt_task_success ...passed 00:07:15.954 Test: dev_queue_task_success ...passed 00:07:15.954 Test: dev_stop_success ...passed 00:07:15.954 Test: dev_add_port_max_ports ...passed 00:07:15.954 Test: dev_add_port_construct_failure1 ...[2024-07-26 05:04:34.861898] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:07:15.954 passed 00:07:15.954 Test: dev_add_port_construct_failure2 ...passed 00:07:15.954 Test: dev_add_port_success1 ...passed 00:07:15.954 Test: dev_add_port_success2 ...passed 00:07:15.954 Test: dev_add_port_success3 ...passed 00:07:15.954 Test: dev_find_port_by_id_num_ports_zero ...passed 00:07:15.954 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:07:15.954 Test: dev_find_port_by_id_success ...passed 00:07:15.954 Test: dev_add_lun_bdev_not_found ...passed 00:07:15.954 Test: dev_add_lun_no_free_lun_id ...[2024-07-26 05:04:34.861940] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:07:15.954 [2024-07-26 05:04:34.861981] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:07:15.954 [2024-07-26 05:04:34.862418] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:07:15.954 passed 00:07:15.954 Test: dev_add_lun_success1 ...passed 00:07:15.954 Test: dev_add_lun_success2 ...passed 00:07:15.954 Test: dev_check_pending_tasks ...passed 00:07:15.954 Test: dev_iterate_luns ...passed 00:07:15.954 Test: dev_find_free_lun ...passed 00:07:15.954 00:07:15.954 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.954 suites 1 1 n/a 0 0 00:07:15.954 tests 29 29 29 0 0 00:07:15.954 asserts 97 97 97 0 n/a 00:07:15.954 00:07:15.954 Elapsed time = 0.002 seconds 00:07:15.954 05:04:34 -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:07:15.954 00:07:15.954 00:07:15.954 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.954 http://cunit.sourceforge.net/ 00:07:15.954 00:07:15.954 00:07:15.954 Suite: lun_suite 00:07:15.954 Test: lun_task_mgmt_execute_abort_task_not_supported ...passed 00:07:15.954 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...passed 00:07:15.954 Test: lun_task_mgmt_execute_lun_reset ...passed 00:07:15.954 Test: lun_task_mgmt_execute_target_reset ...passed 00:07:15.954 Test: lun_task_mgmt_execute_invalid_case ...passed 00:07:15.954 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...[2024-07-26 05:04:34.900988] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:07:15.954 [2024-07-26 05:04:34.901259] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:07:15.954 [2024-07-26 05:04:34.901412] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:07:15.954 passed 00:07:15.954 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:07:15.954 Test: lun_append_task_null_lun_not_supported ...passed 00:07:15.954 Test: lun_execute_scsi_task_pending ...passed 00:07:15.954 Test: lun_execute_scsi_task_complete ...passed 00:07:15.954 Test: lun_execute_scsi_task_resize ...passed 00:07:15.954 Test: lun_destruct_success ...passed 00:07:15.954 Test: lun_construct_null_ctx ...passed 00:07:15.954 Test: lun_construct_success ...passed 00:07:15.954 Test: 
lun_reset_task_wait_scsi_task_complete ...[2024-07-26 05:04:34.901623] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:07:15.954 passed 00:07:15.954 Test: lun_reset_task_suspend_scsi_task ...passed 00:07:15.954 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:07:15.954 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:07:15.954 00:07:15.954 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.954 suites 1 1 n/a 0 0 00:07:15.954 tests 18 18 18 0 0 00:07:15.954 asserts 153 153 153 0 n/a 00:07:15.954 00:07:15.954 Elapsed time = 0.001 seconds 00:07:15.954 05:04:34 -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:07:15.954 00:07:15.954 00:07:15.954 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.954 http://cunit.sourceforge.net/ 00:07:15.954 00:07:15.954 00:07:15.954 Suite: scsi_suite 00:07:15.954 Test: scsi_init ...passed 00:07:15.954 00:07:15.954 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.954 suites 1 1 n/a 0 0 00:07:15.954 tests 1 1 1 0 0 00:07:15.954 asserts 1 1 1 0 n/a 00:07:15.954 00:07:15.954 Elapsed time = 0.000 seconds 00:07:15.954 05:04:34 -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:07:15.954 00:07:15.954 00:07:15.954 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.954 http://cunit.sourceforge.net/ 00:07:15.954 00:07:15.954 00:07:15.954 Suite: translation_suite 00:07:15.954 Test: mode_select_6_test ...passed 00:07:15.954 Test: mode_select_6_test2 ...passed 00:07:15.954 Test: mode_sense_6_test ...passed 00:07:15.954 Test: mode_sense_10_test ...passed 00:07:15.954 Test: inquiry_evpd_test ...passed 00:07:15.954 Test: inquiry_standard_test ...passed 00:07:15.954 Test: inquiry_overflow_test ...passed 00:07:15.954 Test: task_complete_test ...passed 00:07:15.954 Test: lba_range_test ...passed 00:07:15.954 Test: xfer_len_test ...passed 00:07:15.954 Test: xfer_test ...[2024-07-26 05:04:34.963605] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:07:15.954 passed 00:07:15.954 Test: scsi_name_padding_test ...passed 00:07:15.954 Test: get_dif_ctx_test ...passed 00:07:15.954 Test: unmap_split_test ...passed 00:07:15.954 00:07:15.954 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.954 suites 1 1 n/a 0 0 00:07:15.954 tests 14 14 14 0 0 00:07:15.954 asserts 1200 1200 1200 0 n/a 00:07:15.954 00:07:15.954 Elapsed time = 0.005 seconds 00:07:15.954 05:04:34 -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:07:15.954 00:07:15.954 00:07:15.954 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.954 http://cunit.sourceforge.net/ 00:07:15.954 00:07:15.954 00:07:15.954 Suite: reservation_suite 00:07:15.954 Test: test_reservation_register ...passed 00:07:15.954 Test: test_reservation_reserve ...passed 00:07:15.954 Test: test_reservation_preempt_non_all_regs ...[2024-07-26 05:04:34.994958] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:15.954 [2024-07-26 05:04:34.995252] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:15.954 [2024-07-26 05:04:34.995319] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 
209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:07:15.954 [2024-07-26 05:04:34.995368] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:07:15.954 [2024-07-26 05:04:34.995434] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:15.954 passed 00:07:15.955 Test: test_reservation_preempt_all_regs ...passed 00:07:15.955 Test: test_reservation_cmds_conflict ...[2024-07-26 05:04:34.995504] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:07:15.955 [2024-07-26 05:04:34.995593] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:15.955 [2024-07-26 05:04:34.995704] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:15.955 [2024-07-26 05:04:34.995783] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:07:15.955 passed 00:07:15.955 Test: test_scsi2_reserve_release ...passed 00:07:15.955 Test: test_pr_with_scsi2_reserve_release ...passed 00:07:15.955 00:07:15.955 [2024-07-26 05:04:34.995818] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:07:15.955 [2024-07-26 05:04:34.995854] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:07:15.955 [2024-07-26 05:04:34.995882] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:07:15.955 [2024-07-26 05:04:34.995915] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:07:15.955 [2024-07-26 05:04:34.995976] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:15.955 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.955 suites 1 1 n/a 0 0 00:07:15.955 tests 7 7 7 0 0 00:07:15.955 asserts 257 257 257 0 n/a 00:07:15.955 00:07:15.955 Elapsed time = 0.001 seconds 00:07:15.955 00:07:15.955 real 0m0.166s 00:07:15.955 user 0m0.076s 00:07:15.955 sys 0m0.093s 00:07:15.955 05:04:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.955 05:04:35 -- common/autotest_common.sh@10 -- # set +x 00:07:15.955 ************************************ 00:07:15.955 END TEST unittest_scsi 00:07:15.955 ************************************ 00:07:15.955 05:04:35 -- unit/unittest.sh@276 -- # uname -s 00:07:15.955 05:04:35 -- unit/unittest.sh@276 -- # '[' Linux = Linux ']' 00:07:15.955 05:04:35 -- unit/unittest.sh@277 -- # run_test unittest_sock unittest_sock 00:07:15.955 05:04:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:15.955 05:04:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:15.955 05:04:35 -- common/autotest_common.sh@10 -- # set +x 00:07:16.214 ************************************ 00:07:16.214 START TEST unittest_sock 00:07:16.215 ************************************ 00:07:16.215 05:04:35 -- common/autotest_common.sh@1104 -- # unittest_sock 00:07:16.215 05:04:35 -- unit/unittest.sh@123 
-- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:07:16.215 00:07:16.215 00:07:16.215 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.215 http://cunit.sourceforge.net/ 00:07:16.215 00:07:16.215 00:07:16.215 Suite: sock 00:07:16.215 Test: posix_sock ...passed 00:07:16.215 Test: ut_sock ...passed 00:07:16.215 Test: posix_sock_group ...passed 00:07:16.215 Test: ut_sock_group ...passed 00:07:16.215 Test: posix_sock_group_fairness ...passed 00:07:16.215 Test: _posix_sock_close ...passed 00:07:16.215 Test: sock_get_default_opts ...passed 00:07:16.215 Test: ut_sock_impl_get_set_opts ...passed 00:07:16.215 Test: posix_sock_impl_get_set_opts ...passed 00:07:16.215 Test: ut_sock_map ...passed 00:07:16.215 Test: override_impl_opts ...passed 00:07:16.215 Test: ut_sock_group_get_ctx ...passed 00:07:16.215 00:07:16.215 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.215 suites 1 1 n/a 0 0 00:07:16.215 tests 12 12 12 0 0 00:07:16.215 asserts 349 349 349 0 n/a 00:07:16.215 00:07:16.215 Elapsed time = 0.008 seconds 00:07:16.215 05:04:35 -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:07:16.215 00:07:16.215 00:07:16.215 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.215 http://cunit.sourceforge.net/ 00:07:16.215 00:07:16.215 00:07:16.215 Suite: posix 00:07:16.215 Test: flush ...passed 00:07:16.215 00:07:16.215 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.215 suites 1 1 n/a 0 0 00:07:16.215 tests 1 1 1 0 0 00:07:16.215 asserts 28 28 28 0 n/a 00:07:16.215 00:07:16.215 Elapsed time = 0.000 seconds 00:07:16.215 05:04:35 -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:16.215 00:07:16.215 real 0m0.100s 00:07:16.215 user 0m0.036s 00:07:16.215 sys 0m0.041s 00:07:16.215 ************************************ 00:07:16.215 END TEST unittest_sock 00:07:16.215 ************************************ 00:07:16.215 05:04:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.215 05:04:35 -- common/autotest_common.sh@10 -- # set +x 00:07:16.215 05:04:35 -- unit/unittest.sh@279 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:07:16.215 05:04:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:16.215 05:04:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:16.215 05:04:35 -- common/autotest_common.sh@10 -- # set +x 00:07:16.215 ************************************ 00:07:16.215 START TEST unittest_thread 00:07:16.215 ************************************ 00:07:16.215 05:04:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:07:16.215 00:07:16.215 00:07:16.215 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.215 http://cunit.sourceforge.net/ 00:07:16.215 00:07:16.215 00:07:16.215 Suite: io_channel 00:07:16.215 Test: thread_alloc ...passed 00:07:16.215 Test: thread_send_msg ...passed 00:07:16.215 Test: thread_poller ...passed 00:07:16.215 Test: poller_pause ...passed 00:07:16.215 Test: thread_for_each ...passed 00:07:16.215 Test: for_each_channel_remove ...passed 00:07:16.215 Test: for_each_channel_unreg ...[2024-07-26 05:04:35.263949] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x73e8f8b09640 already registered (old:0x513000000200 new:0x5130000003c0) 00:07:16.215 passed 00:07:16.215 Test: thread_name ...passed 
00:07:16.215 Test: channel ...[2024-07-26 05:04:35.268921] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2297:spdk_get_io_channel: *ERROR*: could not find io_device 0x5f01b1860120 00:07:16.215 passed 00:07:16.215 Test: channel_destroy_races ...passed 00:07:16.215 Test: thread_exit_test ...[2024-07-26 05:04:35.275207] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 629:thread_exit: *ERROR*: thread 0x518000005c80 got timeout, and move it to the exited state forcefully 00:07:16.215 passed 00:07:16.215 Test: thread_update_stats_test ...passed 00:07:16.215 Test: nested_channel ...passed 00:07:16.215 Test: device_unregister_and_thread_exit_race ...passed 00:07:16.215 Test: cache_closest_timed_poller ...passed 00:07:16.215 Test: multi_timed_pollers_have_same_expiration ...passed 00:07:16.215 Test: io_device_lookup ...passed 00:07:16.215 Test: spdk_spin ...[2024-07-26 05:04:35.288240] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3061:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:07:16.215 [2024-07-26 05:04:35.288346] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x73e8f8b0a020 00:07:16.215 [2024-07-26 05:04:35.288427] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3099:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:07:16.215 [2024-07-26 05:04:35.290547] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:07:16.215 [2024-07-26 05:04:35.290643] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x73e8f8b0a020 00:07:16.215 [2024-07-26 05:04:35.290691] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:07:16.215 [2024-07-26 05:04:35.290720] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x73e8f8b0a020 00:07:16.215 [2024-07-26 05:04:35.290746] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:07:16.215 [2024-07-26 05:04:35.290803] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x73e8f8b0a020 00:07:16.215 [2024-07-26 05:04:35.290823] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3043:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:07:16.215 [2024-07-26 05:04:35.290843] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x73e8f8b0a020 00:07:16.215 passed 00:07:16.215 Test: for_each_channel_and_thread_exit_race ...passed 00:07:16.215 Test: for_each_thread_and_thread_exit_race ...passed 00:07:16.215 00:07:16.215 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.215 suites 1 1 n/a 0 0 00:07:16.215 tests 20 20 20 0 0 00:07:16.215 asserts 409 409 409 0 n/a 00:07:16.215 00:07:16.215 Elapsed time = 0.061 seconds 00:07:16.215 00:07:16.215 real 0m0.105s 00:07:16.215 user 0m0.065s 00:07:16.215 sys 0m0.040s 00:07:16.475 05:04:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.475 05:04:35 -- common/autotest_common.sh@10 -- # set +x 00:07:16.475 ************************************ 00:07:16.475 END TEST unittest_thread 00:07:16.475 
************************************ 00:07:16.475 05:04:35 -- unit/unittest.sh@280 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:07:16.475 05:04:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:16.475 05:04:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:16.475 05:04:35 -- common/autotest_common.sh@10 -- # set +x 00:07:16.475 ************************************ 00:07:16.475 START TEST unittest_iobuf 00:07:16.475 ************************************ 00:07:16.475 05:04:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:07:16.475 00:07:16.475 00:07:16.475 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.475 http://cunit.sourceforge.net/ 00:07:16.475 00:07:16.475 00:07:16.475 Suite: io_channel 00:07:16.475 Test: iobuf ...passed 00:07:16.475 Test: iobuf_cache ...[2024-07-26 05:04:35.398946] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:07:16.475 [2024-07-26 05:04:35.399181] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:07:16.475 [2024-07-26 05:04:35.399244] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:07:16.475 [2024-07-26 05:04:35.399269] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:07:16.475 [2024-07-26 05:04:35.399323] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:07:16.475 [2024-07-26 05:04:35.399364] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
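The iobuf_cache messages above fail by design: a per-channel buffer cache cannot be populated when the shared pool counts (spdk_iobuf_opts.small_pool_count / large_pool_count, 4 in this test) are too small for the requested cache size. A hedged, SPDK-independent sketch of that sizing constraint (the struct and names below are illustrative, not the SPDK API):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct pool_opts {
    uint32_t small_pool_count;    /* total small buffers shared by all channels */
    uint32_t large_pool_count;    /* total large buffers shared by all channels */
};

/* Every channel wants its own cache; the shared pool must cover all of them.
 * Shown for the small pool only; the same idea applies to the large pool. */
static bool cache_fits(const struct pool_opts *opts, uint32_t channels, uint32_t cache_per_channel)
{
    return (uint64_t)channels * cache_per_channel <= opts->small_pool_count;
}

int main(void)
{
    struct pool_opts opts = { .small_pool_count = 4, .large_pool_count = 4 };

    printf("2 channels x 2 cached bufs: %s\n", cache_fits(&opts, 2, 2) ? "ok" : "pool too small");
    printf("2 channels x 4 cached bufs: %s\n", cache_fits(&opts, 2, 4) ? "ok" : "pool too small");
    return 0;
}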
00:07:16.475 passed 00:07:16.475 00:07:16.475 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.475 suites 1 1 n/a 0 0 00:07:16.475 tests 2 2 2 0 0 00:07:16.475 asserts 107 107 107 0 n/a 00:07:16.475 00:07:16.475 Elapsed time = 0.006 seconds 00:07:16.475 00:07:16.475 real 0m0.037s 00:07:16.475 user 0m0.022s 00:07:16.475 sys 0m0.015s 00:07:16.475 05:04:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.475 05:04:35 -- common/autotest_common.sh@10 -- # set +x 00:07:16.475 ************************************ 00:07:16.475 END TEST unittest_iobuf 00:07:16.475 ************************************ 00:07:16.475 05:04:35 -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util 00:07:16.475 05:04:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:16.475 05:04:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:16.475 05:04:35 -- common/autotest_common.sh@10 -- # set +x 00:07:16.475 ************************************ 00:07:16.475 START TEST unittest_util 00:07:16.475 ************************************ 00:07:16.475 05:04:35 -- common/autotest_common.sh@1104 -- # unittest_util 00:07:16.475 05:04:35 -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:07:16.475 00:07:16.475 00:07:16.475 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.475 http://cunit.sourceforge.net/ 00:07:16.475 00:07:16.475 00:07:16.475 Suite: base64 00:07:16.475 Test: test_base64_get_encoded_strlen ...passed 00:07:16.475 Test: test_base64_get_decoded_len ...passed 00:07:16.475 Test: test_base64_encode ...passed 00:07:16.475 Test: test_base64_decode ...passed 00:07:16.475 Test: test_base64_urlsafe_encode ...passed 00:07:16.475 Test: test_base64_urlsafe_decode ...passed 00:07:16.475 00:07:16.475 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.475 suites 1 1 n/a 0 0 00:07:16.475 tests 6 6 6 0 0 00:07:16.475 asserts 112 112 112 0 n/a 00:07:16.475 00:07:16.475 Elapsed time = 0.000 seconds 00:07:16.475 05:04:35 -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:07:16.475 00:07:16.475 00:07:16.475 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.475 http://cunit.sourceforge.net/ 00:07:16.475 00:07:16.475 00:07:16.475 Suite: bit_array 00:07:16.475 Test: test_1bit ...passed 00:07:16.475 Test: test_64bit ...passed 00:07:16.475 Test: test_find ...passed 00:07:16.475 Test: test_resize ...passed 00:07:16.475 Test: test_errors ...passed 00:07:16.475 Test: test_count ...passed 00:07:16.475 Test: test_mask_store_load ...passed 00:07:16.475 Test: test_mask_clear ...passed 00:07:16.475 00:07:16.475 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.475 suites 1 1 n/a 0 0 00:07:16.475 tests 8 8 8 0 0 00:07:16.475 asserts 5075 5075 5075 0 n/a 00:07:16.475 00:07:16.475 Elapsed time = 0.002 seconds 00:07:16.475 05:04:35 -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:07:16.475 00:07:16.475 00:07:16.475 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.475 http://cunit.sourceforge.net/ 00:07:16.475 00:07:16.475 00:07:16.475 Suite: cpuset 00:07:16.475 Test: test_cpuset ...passed 00:07:16.475 Test: test_cpuset_parse ...[2024-07-26 05:04:35.538683] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:07:16.475 [2024-07-26 05:04:35.538899] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list 
'[]' failed on character ']' 00:07:16.475 [2024-07-26 05:04:35.538937] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:07:16.475 [2024-07-26 05:04:35.538970] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:07:16.475 [2024-07-26 05:04:35.539024] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:07:16.475 [2024-07-26 05:04:35.539065] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:07:16.475 [2024-07-26 05:04:35.539095] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:07:16.475 [2024-07-26 05:04:35.539128] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:07:16.475 passed 00:07:16.475 Test: test_cpuset_fmt ...passed 00:07:16.475 00:07:16.475 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.475 suites 1 1 n/a 0 0 00:07:16.475 tests 3 3 3 0 0 00:07:16.475 asserts 65 65 65 0 n/a 00:07:16.475 00:07:16.475 Elapsed time = 0.002 seconds 00:07:16.475 05:04:35 -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:07:16.475 00:07:16.475 00:07:16.475 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.475 http://cunit.sourceforge.net/ 00:07:16.475 00:07:16.475 00:07:16.475 Suite: crc16 00:07:16.475 Test: test_crc16_t10dif ...passed 00:07:16.475 Test: test_crc16_t10dif_seed ...passed 00:07:16.475 Test: test_crc16_t10dif_copy ...passed 00:07:16.475 00:07:16.475 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.475 suites 1 1 n/a 0 0 00:07:16.475 tests 3 3 3 0 0 00:07:16.475 asserts 5 5 5 0 n/a 00:07:16.475 00:07:16.476 Elapsed time = 0.000 seconds 00:07:16.476 05:04:35 -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:07:16.736 00:07:16.736 00:07:16.736 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.736 http://cunit.sourceforge.net/ 00:07:16.736 00:07:16.736 00:07:16.736 Suite: crc32_ieee 00:07:16.736 Test: test_crc32_ieee ...passed 00:07:16.736 00:07:16.736 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.736 suites 1 1 n/a 0 0 00:07:16.736 tests 1 1 1 0 0 00:07:16.736 asserts 1 1 1 0 n/a 00:07:16.736 00:07:16.736 Elapsed time = 0.000 seconds 00:07:16.736 05:04:35 -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:07:16.736 00:07:16.736 00:07:16.736 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.736 http://cunit.sourceforge.net/ 00:07:16.736 00:07:16.736 00:07:16.736 Suite: crc32c 00:07:16.736 Test: test_crc32c ...passed 00:07:16.736 Test: test_crc32c_nvme ...passed 00:07:16.736 00:07:16.736 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.736 suites 1 1 n/a 0 0 00:07:16.736 tests 2 2 2 0 0 00:07:16.736 asserts 16 16 16 0 n/a 00:07:16.736 00:07:16.736 Elapsed time = 0.000 seconds 00:07:16.736 05:04:35 -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:07:16.736 00:07:16.736 00:07:16.736 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.736 http://cunit.sourceforge.net/ 00:07:16.736 00:07:16.736 00:07:16.736 Suite: crc64 00:07:16.736 Test: test_crc64_nvme 
...passed 00:07:16.736 00:07:16.736 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.736 suites 1 1 n/a 0 0 00:07:16.736 tests 1 1 1 0 0 00:07:16.736 asserts 4 4 4 0 n/a 00:07:16.736 00:07:16.736 Elapsed time = 0.000 seconds 00:07:16.736 05:04:35 -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:07:16.736 00:07:16.736 00:07:16.736 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.736 http://cunit.sourceforge.net/ 00:07:16.736 00:07:16.736 00:07:16.736 Suite: string 00:07:16.736 Test: test_parse_ip_addr ...passed 00:07:16.736 Test: test_str_chomp ...passed 00:07:16.736 Test: test_parse_capacity ...passed 00:07:16.736 Test: test_sprintf_append_realloc ...passed 00:07:16.736 Test: test_strtol ...passed 00:07:16.736 Test: test_strtoll ...passed 00:07:16.736 Test: test_strarray ...passed 00:07:16.736 Test: test_strcpy_replace ...passed 00:07:16.736 00:07:16.736 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.736 suites 1 1 n/a 0 0 00:07:16.736 tests 8 8 8 0 0 00:07:16.736 asserts 161 161 161 0 n/a 00:07:16.736 00:07:16.736 Elapsed time = 0.001 seconds 00:07:16.736 05:04:35 -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:07:16.736 00:07:16.737 00:07:16.737 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.737 http://cunit.sourceforge.net/ 00:07:16.737 00:07:16.737 00:07:16.737 Suite: dif 00:07:16.737 Test: dif_generate_and_verify_test ...[2024-07-26 05:04:35.707035] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:16.737 [2024-07-26 05:04:35.707477] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:16.737 [2024-07-26 05:04:35.707814] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:16.737 [2024-07-26 05:04:35.708149] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:16.737 [2024-07-26 05:04:35.708473] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:16.737 [2024-07-26 05:04:35.708801] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:16.737 passed 00:07:16.737 Test: dif_disable_check_test ...[2024-07-26 05:04:35.709993] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:07:16.737 [2024-07-26 05:04:35.710337] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:07:16.737 [2024-07-26 05:04:35.710653] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:07:16.737 passed 00:07:16.737 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-07-26 05:04:35.711843] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:07:16.737 [2024-07-26 05:04:35.712208] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:07:16.737 [2024-07-26 
05:04:35.712554] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:07:16.737 [2024-07-26 05:04:35.712898] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:07:16.737 [2024-07-26 05:04:35.713285] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:16.737 [2024-07-26 05:04:35.713641] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:16.737 [2024-07-26 05:04:35.713973] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:16.737 [2024-07-26 05:04:35.714335] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:16.737 [2024-07-26 05:04:35.714686] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:16.737 [2024-07-26 05:04:35.715058] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:16.737 [2024-07-26 05:04:35.715397] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:16.737 passed 00:07:16.737 Test: dif_apptag_mask_test ...[2024-07-26 05:04:35.715728] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:07:16.737 [2024-07-26 05:04:35.716041] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:07:16.737 passed 00:07:16.737 Test: dif_sec_512_md_0_error_test ...passed 00:07:16.737 Test: dif_sec_4096_md_0_error_test ...[2024-07-26 05:04:35.716239] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:07:16.737 passed 00:07:16.737 Test: dif_sec_4100_md_128_error_test ...passed 00:07:16.737 Test: dif_guard_seed_test ...[2024-07-26 05:04:35.716275] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:07:16.737 [2024-07-26 05:04:35.716309] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
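The "Failed to compare Guard" checks in this dif suite compare T10-DIF guard tags, i.e. CRC-16 values over each block using polynomial 0x8BB7. A minimal bitwise sketch of that guard computation follows; SPDK's lib/util/dif.c uses its own optimized routines and DIF formats, so this is for illustration only:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* CRC-16/T10-DIF: poly 0x8BB7, init 0, no reflection, no final XOR. */
static uint16_t crc16_t10dif(uint16_t crc, const void *buf, size_t len)
{
    const uint8_t *p = buf;

    while (len--) {
        crc ^= (uint16_t)(*p++) << 8;
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7) : (uint16_t)(crc << 1);
        }
    }
    return crc;
}

int main(void)
{
    uint8_t block[512];

    memset(block, 0xA5, sizeof(block));
    printf("guard = 0x%04x\n", crc16_t10dif(0, block, sizeof(block)));
    return 0;
}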
00:07:16.737 [2024-07-26 05:04:35.716334] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:07:16.737 [2024-07-26 05:04:35.716359] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:07:16.737 passed 00:07:16.737 Test: dif_guard_value_test ...passed 00:07:16.737 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:07:16.737 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:07:16.737 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:16.737 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:16.737 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:16.737 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:07:16.737 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:07:16.737 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:07:16.737 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:07:16.737 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:16.737 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:07:16.737 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:07:16.737 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:07:16.737 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:07:16.737 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:07:16.737 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:07:16.737 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:16.737 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:16.737 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-26 05:04:35.760856] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fdcc, Actual=fd4c 00:07:16.737 [2024-07-26 05:04:35.763336] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fea1, Actual=fe21 00:07:16.737 [2024-07-26 05:04:35.765789] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8 00:07:16.737 [2024-07-26 05:04:35.768231] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8 00:07:16.737 [2024-07-26 05:04:35.770679] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=80005d 00:07:16.737 [2024-07-26 05:04:35.773130] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=80005d 00:07:16.737 [2024-07-26 05:04:35.775574] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd4c, Actual=e5e3 00:07:16.737 [2024-07-26 05:04:35.777426] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fe21, Actual=7c21 00:07:16.737 [2024-07-26 05:04:35.779268] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1a3753ed, Actual=1ab753ed 00:07:16.737 [2024-07-26 
05:04:35.781720] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=38d74660, Actual=38574660 00:07:16.737 [2024-07-26 05:04:35.784175] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8 00:07:16.737 [2024-07-26 05:04:35.786633] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8 00:07:16.737 [2024-07-26 05:04:35.789088] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=8000000000005d 00:07:16.737 [2024-07-26 05:04:35.791546] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=8000000000005d 00:07:16.737 [2024-07-26 05:04:35.793989] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753ed, Actual=3e96017d 00:07:16.737 [2024-07-26 05:04:35.795831] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=38574660, Actual=a13aaa8a 00:07:16.737 [2024-07-26 05:04:35.797665] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a5f6a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:16.737 [2024-07-26 05:04:35.800101] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=88810a2d4837a266, Actual=88010a2d4837a266 00:07:16.737 [2024-07-26 05:04:35.802553] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8 00:07:16.737 [2024-07-26 05:04:35.805011] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8 00:07:16.737 [2024-07-26 05:04:35.807460] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=80005d 00:07:16.737 [2024-07-26 05:04:35.809914] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=80005d 00:07:16.737 [2024-07-26 05:04:35.812351] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc20d3, Actual=bd71905a10ce20a9 00:07:16.738 [2024-07-26 05:04:35.814201] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=88010a2d4837a266, Actual=2a6922b970248a79 00:07:16.738 passed 00:07:16.738 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-07-26 05:04:35.815141] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fdcc, Actual=fd4c 00:07:16.738 [2024-07-26 05:04:35.815438] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fea1, Actual=fe21 00:07:16.738 [2024-07-26 05:04:35.815731] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:07:16.738 [2024-07-26 05:04:35.816039] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:07:16.738 [2024-07-26 05:04:35.816332] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:07:16.738 [2024-07-26 05:04:35.816625] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:07:16.738 [2024-07-26 05:04:35.816921] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e5e3 00:07:16.738 [2024-07-26 05:04:35.817161] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=7c21 00:07:16.738 [2024-07-26 05:04:35.817384] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1a3753ed, Actual=1ab753ed 00:07:16.738 [2024-07-26 05:04:35.817680] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38d74660, Actual=38574660 00:07:16.738 [2024-07-26 05:04:35.817986] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:07:16.738 [2024-07-26 05:04:35.818293] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:07:16.738 [2024-07-26 05:04:35.818567] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000000058 00:07:16.738 [2024-07-26 05:04:35.818853] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000000058 00:07:16.738 [2024-07-26 05:04:35.819154] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=3e96017d 00:07:16.738 [2024-07-26 05:04:35.819379] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=a13aaa8a 00:07:16.738 [2024-07-26 05:04:35.819619] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a5f6a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:16.738 [2024-07-26 05:04:35.819909] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88810a2d4837a266, Actual=88010a2d4837a266 00:07:16.738 [2024-07-26 05:04:35.820205] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:07:16.738 [2024-07-26 05:04:35.820505] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:07:16.738 [2024-07-26 05:04:35.820792] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:07:16.738 [2024-07-26 05:04:35.821156] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:07:16.738 [2024-07-26 05:04:35.821470] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=bd71905a10ce20a9 00:07:16.738 [2024-07-26 05:04:35.821711] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, 
Actual=2a6922b970248a79 00:07:16.738 passed 00:07:16.738 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-07-26 05:04:35.821975] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fdcc, Actual=fd4c 00:07:16.738 [2024-07-26 05:04:35.822297] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fea1, Actual=fe21 00:07:16.738 [2024-07-26 05:04:35.822601] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:07:16.738 [2024-07-26 05:04:35.822886] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:07:16.738 [2024-07-26 05:04:35.823189] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:07:16.738 [2024-07-26 05:04:35.823491] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:07:16.738 [2024-07-26 05:04:35.823777] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e5e3 00:07:16.738 [2024-07-26 05:04:35.824028] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=7c21 00:07:16.738 [2024-07-26 05:04:35.824250] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1a3753ed, Actual=1ab753ed 00:07:16.738 [2024-07-26 05:04:35.824536] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38d74660, Actual=38574660 00:07:16.738 [2024-07-26 05:04:35.824819] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:07:16.738 [2024-07-26 05:04:35.825141] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:07:16.738 [2024-07-26 05:04:35.825445] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000000058 00:07:16.738 [2024-07-26 05:04:35.825757] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000000058 00:07:16.738 [2024-07-26 05:04:35.826061] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=3e96017d 00:07:16.738 [2024-07-26 05:04:35.826287] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=a13aaa8a 00:07:16.738 [2024-07-26 05:04:35.826509] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a5f6a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:16.738 [2024-07-26 05:04:35.826800] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88810a2d4837a266, Actual=88010a2d4837a266 00:07:16.738 [2024-07-26 05:04:35.827107] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:07:16.738 [2024-07-26 05:04:35.827408] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:07:16.738 [2024-07-26 05:04:35.827695] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:07:16.738 [2024-07-26 05:04:35.828023] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:07:16.738 [2024-07-26 05:04:35.828315] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=bd71905a10ce20a9 00:07:16.738 [2024-07-26 05:04:35.828539] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=2a6922b970248a79 00:07:16.738 passed 00:07:16.738 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-07-26 05:04:35.828811] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fdcc, Actual=fd4c 00:07:16.738 [2024-07-26 05:04:35.829116] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fea1, Actual=fe21 00:07:16.738 [2024-07-26 05:04:35.829419] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:07:16.738 [2024-07-26 05:04:35.829725] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:07:16.738 [2024-07-26 05:04:35.830033] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:07:16.738 [2024-07-26 05:04:35.830334] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:07:16.738 [2024-07-26 05:04:35.830620] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e5e3 00:07:16.738 [2024-07-26 05:04:35.830845] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=7c21 00:07:16.738 [2024-07-26 05:04:35.831074] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1a3753ed, Actual=1ab753ed 00:07:16.738 [2024-07-26 05:04:35.831362] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38d74660, Actual=38574660 00:07:16.738 [2024-07-26 05:04:35.831657] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:07:16.738 [2024-07-26 05:04:35.831964] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:07:16.738 [2024-07-26 05:04:35.832268] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000000058 00:07:16.738 [2024-07-26 05:04:35.832554] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000000058 00:07:16.738 [2024-07-26 05:04:35.832857] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: 
LBA=88, Expected=1ab753ed, Actual=3e96017d 00:07:16.738 [2024-07-26 05:04:35.833101] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=a13aaa8a 00:07:16.738 [2024-07-26 05:04:35.833318] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a5f6a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:16.738 [2024-07-26 05:04:35.833626] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88810a2d4837a266, Actual=88010a2d4837a266 00:07:16.738 [2024-07-26 05:04:35.833940] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:07:16.738 [2024-07-26 05:04:35.834249] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:07:16.738 [2024-07-26 05:04:35.834545] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:07:16.738 [2024-07-26 05:04:35.834858] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:07:16.739 [2024-07-26 05:04:35.835167] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=bd71905a10ce20a9 00:07:16.739 [2024-07-26 05:04:35.835400] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=2a6922b970248a79 00:07:16.739 passed 00:07:16.739 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-07-26 05:04:35.835641] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fdcc, Actual=fd4c 00:07:16.739 [2024-07-26 05:04:35.835949] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fea1, Actual=fe21 00:07:16.739 [2024-07-26 05:04:35.836256] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:07:16.739 [2024-07-26 05:04:35.836549] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:07:16.739 [2024-07-26 05:04:35.836827] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:07:16.739 [2024-07-26 05:04:35.837141] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:07:16.739 [2024-07-26 05:04:35.837446] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e5e3 00:07:16.739 [2024-07-26 05:04:35.837698] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=7c21 00:07:16.739 passed 00:07:16.739 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-07-26 05:04:35.837968] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1a3753ed, Actual=1ab753ed 00:07:16.739 [2024-07-26 05:04:35.838284] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38d74660, Actual=38574660 00:07:16.739 [2024-07-26 05:04:35.838576] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:07:16.739 [2024-07-26 05:04:35.838878] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:07:16.739 [2024-07-26 05:04:35.839183] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000000058 00:07:16.739 [2024-07-26 05:04:35.839479] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000000058 00:07:16.739 [2024-07-26 05:04:35.839769] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=3e96017d 00:07:16.739 [2024-07-26 05:04:35.840021] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=a13aaa8a 00:07:16.739 [2024-07-26 05:04:35.840283] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a5f6a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:16.739 [2024-07-26 05:04:35.840577] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88810a2d4837a266, Actual=88010a2d4837a266 00:07:16.739 [2024-07-26 05:04:35.840872] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:07:16.739 [2024-07-26 05:04:35.841170] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:07:16.739 [2024-07-26 05:04:35.841472] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:07:16.739 [2024-07-26 05:04:35.841779] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:07:16.739 [2024-07-26 05:04:35.842079] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=bd71905a10ce20a9 00:07:16.739 [2024-07-26 05:04:35.842309] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=2a6922b970248a79 00:07:16.739 passed 00:07:16.739 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-07-26 05:04:35.842561] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fdcc, Actual=fd4c 00:07:16.739 [2024-07-26 05:04:35.842879] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fea1, Actual=fe21 00:07:16.739 [2024-07-26 05:04:35.843169] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:07:16.739 [2024-07-26 05:04:35.843470] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:07:16.739 [2024-07-26 05:04:35.843747] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to 
compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:07:16.739 [2024-07-26 05:04:35.844073] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:07:16.739 [2024-07-26 05:04:35.844380] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e5e3 00:07:17.000 [2024-07-26 05:04:35.844603] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=7c21 00:07:17.000 passed 00:07:17.000 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-07-26 05:04:35.844870] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1a3753ed, Actual=1ab753ed 00:07:17.000 [2024-07-26 05:04:35.845189] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38d74660, Actual=38574660 00:07:17.000 [2024-07-26 05:04:35.845485] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:07:17.000 [2024-07-26 05:04:35.845770] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:07:17.000 [2024-07-26 05:04:35.846078] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000000058 00:07:17.000 [2024-07-26 05:04:35.846377] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000000058 00:07:17.000 [2024-07-26 05:04:35.846654] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=3e96017d 00:07:17.000 [2024-07-26 05:04:35.846879] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=a13aaa8a 00:07:17.000 [2024-07-26 05:04:35.847141] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a5f6a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:17.000 [2024-07-26 05:04:35.847438] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88810a2d4837a266, Actual=88010a2d4837a266 00:07:17.000 [2024-07-26 05:04:35.847731] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:07:17.000 [2024-07-26 05:04:35.848042] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8 00:07:17.000 [2024-07-26 05:04:35.848337] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:07:17.000 [2024-07-26 05:04:35.848630] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800058 00:07:17.000 [2024-07-26 05:04:35.848906] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=bd71905a10ce20a9 00:07:17.000 [2024-07-26 05:04:35.849148] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, 
Actual=2a6922b970248a79 00:07:17.000 passed 00:07:17.000 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:07:17.000 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:17.000 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:07:17.000 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:17.000 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:07:17.000 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:07:17.000 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:17.000 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:07:17.000 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:17.000 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-26 05:04:35.893475] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fdcc, Actual=fd4c 00:07:17.000 [2024-07-26 05:04:35.894589] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=19ee, Actual=196e 00:07:17.000 [2024-07-26 05:04:35.895694] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8 00:07:17.000 [2024-07-26 05:04:35.896800] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8 00:07:17.000 [2024-07-26 05:04:35.897897] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=80005d 00:07:17.000 [2024-07-26 05:04:35.899018] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=80005d 00:07:17.000 [2024-07-26 05:04:35.900119] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd4c, Actual=e5e3 00:07:17.000 [2024-07-26 05:04:35.901211] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f141, Actual=7341 00:07:17.000 [2024-07-26 05:04:35.902312] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1a3753ed, Actual=1ab753ed 00:07:17.000 [2024-07-26 05:04:35.903437] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f3e99eb8, Actual=f3699eb8 00:07:17.000 [2024-07-26 05:04:35.904571] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8 00:07:17.000 [2024-07-26 05:04:35.905695] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8 00:07:17.000 [2024-07-26 05:04:35.906801] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=8000000000005d 00:07:17.000 [2024-07-26 05:04:35.907934] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=8000000000005d 00:07:17.000 [2024-07-26 05:04:35.909060] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753ed, Actual=3e96017d 00:07:17.000 [2024-07-26 05:04:35.910175] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=5a3d6598, Actual=c3508972 00:07:17.000 [2024-07-26 05:04:35.911265] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a5f6a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:17.000 [2024-07-26 05:04:35.912379] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=d7c849fd26be2224, Actual=d74849fd26be2224 00:07:17.000 [2024-07-26 05:04:35.913514] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8 00:07:17.000 [2024-07-26 05:04:35.914637] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8 00:07:17.000 [2024-07-26 05:04:35.915742] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=80005d 00:07:17.000 [2024-07-26 05:04:35.916873] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=80005d 00:07:17.000 [2024-07-26 05:04:35.918015] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc20d3, Actual=bd71905a10ce20a9 00:07:17.000 passed 00:07:17.000 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-26 05:04:35.919128] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=bdcaeff86fabb130, Actual=1fa2c76c57b8992f 00:07:17.001 [2024-07-26 05:04:35.919453] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fdcc, Actual=fd4c 00:07:17.001 [2024-07-26 05:04:35.919719] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=7a6f, Actual=7aef 00:07:17.001 [2024-07-26 05:04:35.919986] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:17.001 [2024-07-26 05:04:35.920249] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:17.001 [2024-07-26 05:04:35.920511] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800059 00:07:17.001 [2024-07-26 05:04:35.920773] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800059 00:07:17.001 [2024-07-26 05:04:35.921041] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=e5e3 00:07:17.001 [2024-07-26 05:04:35.921300] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=10c0 00:07:17.001 [2024-07-26 05:04:35.921558] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1a3753ed, Actual=1ab753ed 00:07:17.001 [2024-07-26 05:04:35.921819] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=3269ab4d, Actual=32e9ab4d 00:07:17.001 [2024-07-26 05:04:35.922104] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:17.001 [2024-07-26 05:04:35.922414] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:17.001 [2024-07-26 05:04:35.922670] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80000000000059 00:07:17.001 [2024-07-26 05:04:35.922931] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80000000000059 00:07:17.001 [2024-07-26 05:04:35.923200] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=3e96017d 00:07:17.001 [2024-07-26 05:04:35.923454] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=2d0bc87 00:07:17.001 [2024-07-26 05:04:35.923720] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a5f6a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:17.001 [2024-07-26 05:04:35.923978] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=222a466e0051a67b, Actual=22aa466e0051a67b 00:07:17.001 [2024-07-26 05:04:35.924273] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:17.001 [2024-07-26 05:04:35.924539] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:17.001 [2024-07-26 05:04:35.924805] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800059 00:07:17.001 [2024-07-26 05:04:35.925066] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800059 00:07:17.001 [2024-07-26 05:04:35.925321] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=bd71905a10ce20a9 00:07:17.001 [2024-07-26 05:04:35.925601] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=ea40c8ff71571d70 00:07:17.001 passed 00:07:17.001 Test: dix_sec_512_md_0_error ...passed 00:07:17.001 Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-07-26 05:04:35.925641] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:07:17.001 passed 00:07:17.001 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:17.001 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:07:17.001 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:17.001 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:07:17.001 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:07:17.001 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:17.001 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:07:17.001 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:17.001 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-26 05:04:35.969288] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fdcc, Actual=fd4c 00:07:17.001 [2024-07-26 05:04:35.970411] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=19ee, Actual=196e 00:07:17.001 [2024-07-26 05:04:35.971510] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8 00:07:17.001 [2024-07-26 05:04:35.972627] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8 00:07:17.001 [2024-07-26 05:04:35.973744] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=80005d 00:07:17.001 [2024-07-26 05:04:35.974854] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=80005d 00:07:17.001 [2024-07-26 05:04:35.975975] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd4c, Actual=e5e3 00:07:17.001 [2024-07-26 05:04:35.977077] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f141, Actual=7341 00:07:17.001 [2024-07-26 05:04:35.978199] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1a3753ed, Actual=1ab753ed 00:07:17.001 [2024-07-26 05:04:35.979327] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f3e99eb8, Actual=f3699eb8 00:07:17.001 [2024-07-26 05:04:35.980424] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8 00:07:17.001 [2024-07-26 05:04:35.981536] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8 00:07:17.001 [2024-07-26 05:04:35.982630] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=8000000000005d 00:07:17.001 [2024-07-26 05:04:35.983736] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=8000000000005d 00:07:17.001 [2024-07-26 05:04:35.984825] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753ed, Actual=3e96017d 00:07:17.001 [2024-07-26 05:04:35.985947] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=5a3d6598, Actual=c3508972 00:07:17.001 [2024-07-26 05:04:35.987068] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a5f6a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:17.001 [2024-07-26 05:04:35.988154] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=d7c849fd26be2224, Actual=d74849fd26be2224 00:07:17.001 [2024-07-26 05:04:35.989273] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8 00:07:17.001 [2024-07-26 05:04:35.990368] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8 00:07:17.001 [2024-07-26 05:04:35.991480] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=80005d 00:07:17.001 [2024-07-26 05:04:35.992606] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=80005d 00:07:17.001 [2024-07-26 05:04:35.993733] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc20d3, Actual=bd71905a10ce20a9 00:07:17.001 passed 00:07:17.001 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-26 05:04:35.994836] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=bdcaeff86fabb130, Actual=1fa2c76c57b8992f 00:07:17.001 [2024-07-26 05:04:35.995200] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fdcc, Actual=fd4c 00:07:17.001 [2024-07-26 05:04:35.995458] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=7a6f, Actual=7aef 00:07:17.001 [2024-07-26 05:04:35.995708] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:17.001 [2024-07-26 05:04:35.995972] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:17.001 [2024-07-26 05:04:35.996259] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800059 00:07:17.001 [2024-07-26 05:04:35.996509] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800059 00:07:17.001 [2024-07-26 05:04:35.996776] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=e5e3 00:07:17.001 [2024-07-26 05:04:35.997034] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=10c0 00:07:17.001 [2024-07-26 05:04:35.997331] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1a3753ed, Actual=1ab753ed 00:07:17.001 [2024-07-26 05:04:35.997602] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=3269ab4d, Actual=32e9ab4d 00:07:17.001 [2024-07-26 05:04:35.997875] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:17.001 [2024-07-26 05:04:35.998154] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: 
Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:17.001 [2024-07-26 05:04:35.998423] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80000000000059 00:07:17.001 [2024-07-26 05:04:35.998691] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80000000000059 00:07:17.001 [2024-07-26 05:04:35.998952] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=3e96017d 00:07:17.001 [2024-07-26 05:04:35.999212] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=2d0bc87 00:07:17.001 [2024-07-26 05:04:35.999480] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a5f6a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:17.001 [2024-07-26 05:04:35.999744] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=222a466e0051a67b, Actual=22aa466e0051a67b 00:07:17.001 [2024-07-26 05:04:36.000025] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:17.002 [2024-07-26 05:04:36.000287] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:17.002 [2024-07-26 05:04:36.000549] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800059 00:07:17.002 [2024-07-26 05:04:36.000799] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800059 00:07:17.002 [2024-07-26 05:04:36.001067] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=b790691e7737ebcc 00:07:17.002 [2024-07-26 05:04:36.001312] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=7de7a41028d9a14d 00:07:17.002 passed 00:07:17.002 Test: set_md_interleave_iovs_test ...passed 00:07:17.002 Test: set_md_interleave_iovs_split_test ...passed 00:07:17.002 Test: dif_generate_stream_pi_16_test ...passed 00:07:17.002 Test: dif_generate_stream_test ...passed 00:07:17.002 Test: set_md_interleave_iovs_alignment_test ...[2024-07-26 05:04:36.009294] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
00:07:17.002 passed 00:07:17.002 Test: dif_generate_split_test ...passed 00:07:17.002 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:07:17.002 Test: dif_verify_split_test ...passed 00:07:17.002 Test: dif_verify_stream_multi_segments_test ...passed 00:07:17.002 Test: update_crc32c_pi_16_test ...passed 00:07:17.002 Test: update_crc32c_test ...passed 00:07:17.002 Test: dif_update_crc32c_split_test ...passed 00:07:17.002 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:07:17.002 Test: get_range_with_md_test ...passed 00:07:17.002 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:07:17.002 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:07:17.002 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:07:17.002 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:07:17.002 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:07:17.002 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:07:17.002 Test: dif_generate_and_verify_unmap_test ...passed 00:07:17.002 00:07:17.002 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.002 suites 1 1 n/a 0 0 00:07:17.002 tests 79 79 79 0 0 00:07:17.002 asserts 3584 3584 3584 0 n/a 00:07:17.002 00:07:17.002 Elapsed time = 0.350 seconds 00:07:17.002 05:04:36 -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:07:17.002 00:07:17.002 00:07:17.002 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.002 http://cunit.sourceforge.net/ 00:07:17.002 00:07:17.002 00:07:17.002 Suite: iov 00:07:17.002 Test: test_single_iov ...passed 00:07:17.002 Test: test_simple_iov ...passed 00:07:17.002 Test: test_complex_iov ...passed 00:07:17.002 Test: test_iovs_to_buf ...passed 00:07:17.002 Test: test_buf_to_iovs ...passed 00:07:17.002 Test: test_memset ...passed 00:07:17.002 Test: test_iov_one ...passed 00:07:17.002 Test: test_iov_xfer ...passed 00:07:17.002 00:07:17.002 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.002 suites 1 1 n/a 0 0 00:07:17.002 tests 8 8 8 0 0 00:07:17.002 asserts 156 156 156 0 n/a 00:07:17.002 00:07:17.002 Elapsed time = 0.000 seconds 00:07:17.002 05:04:36 -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:07:17.261 00:07:17.261 00:07:17.261 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.261 http://cunit.sourceforge.net/ 00:07:17.261 00:07:17.261 00:07:17.261 Suite: math 00:07:17.261 Test: test_serial_number_arithmetic ...passed 00:07:17.261 Suite: erase 00:07:17.262 Test: test_memset_s ...passed 00:07:17.262 00:07:17.262 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.262 suites 2 2 n/a 0 0 00:07:17.262 tests 2 2 2 0 0 00:07:17.262 asserts 18 18 18 0 n/a 00:07:17.262 00:07:17.262 Elapsed time = 0.000 seconds 00:07:17.262 05:04:36 -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:07:17.262 00:07:17.262 00:07:17.262 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.262 http://cunit.sourceforge.net/ 00:07:17.262 00:07:17.262 00:07:17.262 Suite: pipe 00:07:17.262 Test: test_create_destroy ...passed 00:07:17.262 Test: test_write_get_buffer ...passed 00:07:17.262 Test: test_write_advance ...passed 00:07:17.262 Test: test_read_get_buffer ...passed 00:07:17.262 Test: test_read_advance ...passed 00:07:17.262 Test: test_data ...passed 00:07:17.262 00:07:17.262 Run Summary: Type Total Ran 
Passed Failed Inactive 00:07:17.262 suites 1 1 n/a 0 0 00:07:17.262 tests 6 6 6 0 0 00:07:17.262 asserts 250 250 250 0 n/a 00:07:17.262 00:07:17.262 Elapsed time = 0.000 seconds 00:07:17.262 05:04:36 -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:07:17.262 00:07:17.262 00:07:17.262 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.262 http://cunit.sourceforge.net/ 00:07:17.262 00:07:17.262 00:07:17.262 Suite: xor 00:07:17.262 Test: test_xor_gen ...passed 00:07:17.262 00:07:17.262 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.262 suites 1 1 n/a 0 0 00:07:17.262 tests 1 1 1 0 0 00:07:17.262 asserts 17 17 17 0 n/a 00:07:17.262 00:07:17.262 Elapsed time = 0.007 seconds 00:07:17.262 00:07:17.262 real 0m0.745s 00:07:17.262 user 0m0.545s 00:07:17.262 sys 0m0.206s 00:07:17.262 05:04:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.262 05:04:36 -- common/autotest_common.sh@10 -- # set +x 00:07:17.262 ************************************ 00:07:17.262 END TEST unittest_util 00:07:17.262 ************************************ 00:07:17.262 05:04:36 -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:17.262 05:04:36 -- unit/unittest.sh@283 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:07:17.262 05:04:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:17.262 05:04:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.262 05:04:36 -- common/autotest_common.sh@10 -- # set +x 00:07:17.262 ************************************ 00:07:17.262 START TEST unittest_vhost 00:07:17.262 ************************************ 00:07:17.262 05:04:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:07:17.262 00:07:17.262 00:07:17.262 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.262 http://cunit.sourceforge.net/ 00:07:17.262 00:07:17.262 00:07:17.262 Suite: vhost_suite 00:07:17.262 Test: desc_to_iov_test ...[2024-07-26 05:04:36.294406] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 647:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:07:17.262 passed 00:07:17.262 Test: create_controller_test ...[2024-07-26 05:04:36.299464] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:07:17.262 [2024-07-26 05:04:36.299586] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:07:17.262 [2024-07-26 05:04:36.299717] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:07:17.262 [2024-07-26 05:04:36.299800] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:07:17.262 [2024-07-26 05:04:36.299841] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:07:17.262 [2024-07-26 05:04:36.299918] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1798:vhost_user_dev_init: *ERROR*: Resulting socket path for controller 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxpassed 00:07:17.262 Test: session_find_by_vid_test ...[2024-07-26 05:04:36.301075] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:07:17.262 passed 00:07:17.262 Test: remove_controller_test ...passed 00:07:17.262 Test: vq_avail_ring_get_test ...[2024-07-26 05:04:36.303410] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1883:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:07:17.262 passed 00:07:17.262 Test: vq_packed_ring_test ...passed 00:07:17.262 Test: vhost_blk_construct_test ...passed 00:07:17.262 00:07:17.262 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.262 suites 1 1 n/a 0 0 00:07:17.262 tests 7 7 7 0 0 00:07:17.262 asserts 145 145 145 0 n/a 00:07:17.262 00:07:17.262 Elapsed time = 0.014 seconds 00:07:17.262 00:07:17.262 real 0m0.053s 00:07:17.262 user 0m0.027s 00:07:17.262 sys 0m0.026s 00:07:17.262 05:04:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.262 05:04:36 -- common/autotest_common.sh@10 -- # set +x 00:07:17.262 ************************************ 00:07:17.262 END TEST unittest_vhost 00:07:17.262 ************************************ 00:07:17.262 05:04:36 -- unit/unittest.sh@285 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:07:17.262 05:04:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:17.262 05:04:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.262 05:04:36 -- common/autotest_common.sh@10 -- # set +x 00:07:17.522 ************************************ 00:07:17.522 START TEST unittest_dma 00:07:17.522 ************************************ 00:07:17.522 05:04:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:07:17.522 00:07:17.522 00:07:17.522 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.522 http://cunit.sourceforge.net/ 00:07:17.522 00:07:17.522 00:07:17.522 Suite: dma_suite 00:07:17.522 Test: test_dma ...[2024-07-26 05:04:36.392473] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 37:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:07:17.522 passed 00:07:17.522 00:07:17.522 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.522 suites 1 1 n/a 0 0 00:07:17.522 tests 1 1 1 0 0 00:07:17.522 asserts 50 50 50 0 n/a 00:07:17.522 00:07:17.522 Elapsed time = 0.000 seconds 00:07:17.522 00:07:17.522 real 0m0.027s 00:07:17.522 user 0m0.013s 00:07:17.522 sys 0m0.015s 00:07:17.522 05:04:36 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.522 ************************************ 00:07:17.522 END TEST unittest_dma 00:07:17.522 05:04:36 -- common/autotest_common.sh@10 -- # set +x 00:07:17.522 ************************************ 00:07:17.522 05:04:36 -- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:07:17.522 05:04:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:17.522 05:04:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.522 05:04:36 -- common/autotest_common.sh@10 -- # set +x 00:07:17.522 ************************************ 00:07:17.522 START TEST unittest_init 00:07:17.522 ************************************ 00:07:17.522 05:04:36 -- common/autotest_common.sh@1104 -- # unittest_init 00:07:17.522 05:04:36 -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:07:17.522 00:07:17.522 00:07:17.522 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.522 http://cunit.sourceforge.net/ 00:07:17.522 00:07:17.522 00:07:17.522 Suite: subsystem_suite 00:07:17.522 Test: subsystem_sort_test_depends_on_single ...passed 00:07:17.522 Test: subsystem_sort_test_depends_on_multiple ...passed 00:07:17.522 Test: subsystem_sort_test_missing_dependency ...[2024-07-26 05:04:36.476907] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 190:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:07:17.522 passed 00:07:17.522 00:07:17.522 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.522 suites 1 1 n/a 0 0 00:07:17.522 tests 3 3 3 0 0 00:07:17.522 asserts 20 20 20 0 n/a 00:07:17.522 00:07:17.522 Elapsed time = 0.000 seconds 00:07:17.522 [2024-07-26 05:04:36.477158] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:07:17.522 00:07:17.522 real 0m0.036s 00:07:17.522 user 0m0.020s 00:07:17.522 sys 0m0.016s 00:07:17.522 05:04:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.522 05:04:36 -- common/autotest_common.sh@10 -- # set +x 00:07:17.522 ************************************ 00:07:17.522 END TEST unittest_init 00:07:17.522 ************************************ 00:07:17.522 05:04:36 -- unit/unittest.sh@289 -- # '[' yes = yes ']' 00:07:17.522 05:04:36 -- unit/unittest.sh@289 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:07:17.522 05:04:36 -- unit/unittest.sh@290 -- # hostname 00:07:17.522 05:04:36 -- unit/unittest.sh@290 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2404-cloud-1720510786-2314 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:07:17.781 geninfo: WARNING: invalid characters removed from testname! 
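For reference, the coverage steps the log performs next can be approximated by hand with the same tools; this is only a sketch, with the long --rc options and absolute paths trimmed — the real invocations are the ones recorded in the log itself:

  # capture counters produced by the unit-test run (the command just above)
  lcov --no-external -q -d . -c -t <testname> -o ut_cov_test.info
  # merge the pre-test baseline with the test capture into one tracefile
  lcov -q -a ut_cov_base.info -a ut_cov_test.info -o ut_cov_total.info
  lcov -q -a ut_cov_total.info -o ut_cov_unit.info
  # strip app/, dpdk/, examples/, rte_vhost and test/ sources, one filter per pass
  lcov -q -r ut_cov_unit.info '*/app/*' -o ut_cov_unit.info
  lcov -q -r ut_cov_unit.info '*/test/*' -o ut_cov_unit.info
  # render the HTML report; the 'Processing file ...' lines below are genhtml's output
  genhtml ut_cov_unit.info --output-directory ut_coverage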
00:07:56.494 05:05:09 -- unit/unittest.sh@291 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:07:56.494 05:05:14 -- unit/unittest.sh@292 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:58.396 05:05:17 -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:00.929 05:05:19 -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:03.463 05:05:22 -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:06.767 05:05:25 -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:09.301 05:05:28 -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:12.595 05:05:31 -- unit/unittest.sh@298 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:08:12.595 05:05:31 -- unit/unittest.sh@299 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:08:12.854 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:12.854 Found 313 entries. 
00:08:12.854 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:08:12.854 Writing .css and .png files. 00:08:12.854 Generating output. 00:08:12.854 Processing file include/linux/virtio_ring.h 00:08:13.112 Processing file include/spdk/histogram_data.h 00:08:13.112 Processing file include/spdk/trace.h 00:08:13.112 Processing file include/spdk/base64.h 00:08:13.112 Processing file include/spdk/bdev_module.h 00:08:13.112 Processing file include/spdk/mmio.h 00:08:13.112 Processing file include/spdk/nvmf_transport.h 00:08:13.112 Processing file include/spdk/endian.h 00:08:13.112 Processing file include/spdk/nvme.h 00:08:13.112 Processing file include/spdk/util.h 00:08:13.112 Processing file include/spdk/thread.h 00:08:13.112 Processing file include/spdk/nvme_spec.h 00:08:13.371 Processing file include/spdk_internal/nvme_tcp.h 00:08:13.371 Processing file include/spdk_internal/sgl.h 00:08:13.371 Processing file include/spdk_internal/sock.h 00:08:13.371 Processing file include/spdk_internal/rdma.h 00:08:13.371 Processing file include/spdk_internal/utf.h 00:08:13.371 Processing file include/spdk_internal/virtio.h 00:08:13.371 Processing file lib/accel/accel.c 00:08:13.371 Processing file lib/accel/accel_sw.c 00:08:13.371 Processing file lib/accel/accel_rpc.c 00:08:13.629 Processing file lib/bdev/bdev.c 00:08:13.629 Processing file lib/bdev/scsi_nvme.c 00:08:13.629 Processing file lib/bdev/part.c 00:08:13.629 Processing file lib/bdev/bdev_rpc.c 00:08:13.629 Processing file lib/bdev/bdev_zone.c 00:08:13.887 Processing file lib/blob/blob_bs_dev.c 00:08:13.887 Processing file lib/blob/blobstore.h 00:08:13.887 Processing file lib/blob/request.c 00:08:13.887 Processing file lib/blob/blobstore.c 00:08:13.887 Processing file lib/blob/zeroes.c 00:08:13.887 Processing file lib/blobfs/blobfs.c 00:08:13.887 Processing file lib/blobfs/tree.c 00:08:14.183 Processing file lib/conf/conf.c 00:08:14.183 Processing file lib/dma/dma.c 00:08:14.451 Processing file lib/env_dpdk/sigbus_handler.c 00:08:14.451 Processing file lib/env_dpdk/memory.c 00:08:14.451 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:08:14.451 Processing file lib/env_dpdk/init.c 00:08:14.451 Processing file lib/env_dpdk/pci_vmd.c 00:08:14.451 Processing file lib/env_dpdk/pci_virtio.c 00:08:14.451 Processing file lib/env_dpdk/env.c 00:08:14.451 Processing file lib/env_dpdk/pci.c 00:08:14.451 Processing file lib/env_dpdk/threads.c 00:08:14.451 Processing file lib/env_dpdk/pci_event.c 00:08:14.451 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:08:14.451 Processing file lib/env_dpdk/pci_dpdk.c 00:08:14.451 Processing file lib/env_dpdk/pci_idxd.c 00:08:14.451 Processing file lib/env_dpdk/pci_ioat.c 00:08:14.451 Processing file lib/event/scheduler_static.c 00:08:14.451 Processing file lib/event/log_rpc.c 00:08:14.451 Processing file lib/event/app_rpc.c 00:08:14.451 Processing file lib/event/app.c 00:08:14.451 Processing file lib/event/reactor.c 00:08:15.017 Processing file lib/ftl/ftl_debug.h 00:08:15.017 Processing file lib/ftl/ftl_nv_cache_io.h 00:08:15.017 Processing file lib/ftl/ftl_l2p_flat.c 00:08:15.017 Processing file lib/ftl/ftl_layout.c 00:08:15.017 Processing file lib/ftl/ftl_debug.c 00:08:15.017 Processing file lib/ftl/ftl_band.c 00:08:15.017 Processing file lib/ftl/ftl_reloc.c 00:08:15.017 Processing file lib/ftl/ftl_sb.c 00:08:15.017 Processing file lib/ftl/ftl_core.h 00:08:15.017 Processing file lib/ftl/ftl_rq.c 00:08:15.017 Processing file lib/ftl/ftl_writer.c 00:08:15.017 Processing file lib/ftl/ftl_nv_cache.c 00:08:15.017 
Processing file lib/ftl/ftl_trace.c 00:08:15.017 Processing file lib/ftl/ftl_io.h 00:08:15.017 Processing file lib/ftl/ftl_l2p_cache.c 00:08:15.017 Processing file lib/ftl/ftl_band.h 00:08:15.017 Processing file lib/ftl/ftl_band_ops.c 00:08:15.017 Processing file lib/ftl/ftl_io.c 00:08:15.017 Processing file lib/ftl/ftl_p2l.c 00:08:15.017 Processing file lib/ftl/ftl_l2p.c 00:08:15.017 Processing file lib/ftl/ftl_nv_cache.h 00:08:15.017 Processing file lib/ftl/ftl_init.c 00:08:15.017 Processing file lib/ftl/ftl_core.c 00:08:15.017 Processing file lib/ftl/ftl_writer.h 00:08:15.017 Processing file lib/ftl/base/ftl_base_bdev.c 00:08:15.017 Processing file lib/ftl/base/ftl_base_dev.c 00:08:15.275 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:08:15.275 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:08:15.275 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:08:15.275 Processing file lib/ftl/mngt/ftl_mngt.c 00:08:15.275 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:08:15.275 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:08:15.275 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:08:15.275 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:08:15.275 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:08:15.275 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:08:15.275 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:08:15.275 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:08:15.275 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:08:15.275 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:08:15.275 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:08:15.533 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:08:15.533 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:08:15.533 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:08:15.533 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:08:15.533 Processing file lib/ftl/utils/ftl_conf.c 00:08:15.533 Processing file lib/ftl/utils/ftl_md.c 00:08:15.533 Processing file lib/ftl/utils/ftl_bitmap.c 00:08:15.533 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:08:15.533 Processing file lib/ftl/utils/ftl_property.c 00:08:15.533 Processing file lib/ftl/utils/ftl_df.h 00:08:15.533 Processing file lib/ftl/utils/ftl_property.h 00:08:15.533 Processing file lib/ftl/utils/ftl_mempool.c 00:08:15.533 Processing file lib/ftl/utils/ftl_addr_utils.h 00:08:15.791 Processing file lib/idxd/idxd_kernel.c 00:08:15.791 Processing file lib/idxd/idxd.c 00:08:15.791 Processing file lib/idxd/idxd_user.c 00:08:15.791 Processing file lib/idxd/idxd_internal.h 00:08:15.791 Processing file lib/init/subsystem_rpc.c 00:08:15.791 Processing file lib/init/rpc.c 00:08:15.791 Processing file lib/init/subsystem.c 00:08:15.791 Processing file lib/init/json_config.c 00:08:15.791 Processing file lib/ioat/ioat.c 00:08:15.791 Processing file lib/ioat/ioat_internal.h 00:08:16.358 Processing file lib/iscsi/iscsi_rpc.c 00:08:16.358 Processing file lib/iscsi/conn.c 00:08:16.358 Processing file lib/iscsi/tgt_node.c 00:08:16.358 Processing file lib/iscsi/param.c 00:08:16.358 Processing file lib/iscsi/md5.c 00:08:16.358 Processing file lib/iscsi/task.h 00:08:16.358 Processing file lib/iscsi/task.c 00:08:16.358 Processing file lib/iscsi/iscsi.c 00:08:16.358 Processing file lib/iscsi/init_grp.c 00:08:16.358 Processing file lib/iscsi/portal_grp.c 00:08:16.358 Processing file lib/iscsi/iscsi.h 00:08:16.358 Processing file lib/iscsi/iscsi_subsystem.c 00:08:16.358 Processing file lib/json/json_parse.c 00:08:16.358 Processing file lib/json/json_write.c 00:08:16.358 Processing file 
lib/json/json_util.c 00:08:16.358 Processing file lib/jsonrpc/jsonrpc_server.c 00:08:16.358 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:08:16.358 Processing file lib/jsonrpc/jsonrpc_client.c 00:08:16.358 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:08:16.616 Processing file lib/log/log_flags.c 00:08:16.616 Processing file lib/log/log.c 00:08:16.616 Processing file lib/log/log_deprecated.c 00:08:16.616 Processing file lib/lvol/lvol.c 00:08:16.616 Processing file lib/nbd/nbd.c 00:08:16.616 Processing file lib/nbd/nbd_rpc.c 00:08:16.874 Processing file lib/notify/notify.c 00:08:16.874 Processing file lib/notify/notify_rpc.c 00:08:17.440 Processing file lib/nvme/nvme_io_msg.c 00:08:17.440 Processing file lib/nvme/nvme_ctrlr.c 00:08:17.440 Processing file lib/nvme/nvme_ns.c 00:08:17.440 Processing file lib/nvme/nvme_cuse.c 00:08:17.440 Processing file lib/nvme/nvme_fabric.c 00:08:17.440 Processing file lib/nvme/nvme_ns_cmd.c 00:08:17.440 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:08:17.440 Processing file lib/nvme/nvme_pcie_common.c 00:08:17.440 Processing file lib/nvme/nvme_discovery.c 00:08:17.440 Processing file lib/nvme/nvme_internal.h 00:08:17.440 Processing file lib/nvme/nvme_pcie.c 00:08:17.440 Processing file lib/nvme/nvme_pcie_internal.h 00:08:17.440 Processing file lib/nvme/nvme_tcp.c 00:08:17.440 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:08:17.440 Processing file lib/nvme/nvme_transport.c 00:08:17.440 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:08:17.440 Processing file lib/nvme/nvme_vfio_user.c 00:08:17.440 Processing file lib/nvme/nvme_qpair.c 00:08:17.440 Processing file lib/nvme/nvme_quirks.c 00:08:17.440 Processing file lib/nvme/nvme_poll_group.c 00:08:17.440 Processing file lib/nvme/nvme_rdma.c 00:08:17.440 Processing file lib/nvme/nvme_zns.c 00:08:17.440 Processing file lib/nvme/nvme_opal.c 00:08:17.440 Processing file lib/nvme/nvme.c 00:08:18.007 Processing file lib/nvmf/ctrlr_discovery.c 00:08:18.007 Processing file lib/nvmf/nvmf_internal.h 00:08:18.007 Processing file lib/nvmf/transport.c 00:08:18.007 Processing file lib/nvmf/rdma.c 00:08:18.007 Processing file lib/nvmf/tcp.c 00:08:18.007 Processing file lib/nvmf/nvmf.c 00:08:18.007 Processing file lib/nvmf/nvmf_rpc.c 00:08:18.007 Processing file lib/nvmf/ctrlr.c 00:08:18.007 Processing file lib/nvmf/subsystem.c 00:08:18.007 Processing file lib/nvmf/ctrlr_bdev.c 00:08:18.007 Processing file lib/rdma/common.c 00:08:18.007 Processing file lib/rdma/rdma_verbs.c 00:08:18.265 Processing file lib/rpc/rpc.c 00:08:18.265 Processing file lib/scsi/scsi.c 00:08:18.265 Processing file lib/scsi/task.c 00:08:18.265 Processing file lib/scsi/scsi_pr.c 00:08:18.265 Processing file lib/scsi/port.c 00:08:18.265 Processing file lib/scsi/scsi_bdev.c 00:08:18.265 Processing file lib/scsi/lun.c 00:08:18.265 Processing file lib/scsi/scsi_rpc.c 00:08:18.265 Processing file lib/scsi/dev.c 00:08:18.524 Processing file lib/sock/sock.c 00:08:18.524 Processing file lib/sock/sock_rpc.c 00:08:18.524 Processing file lib/thread/thread.c 00:08:18.524 Processing file lib/thread/iobuf.c 00:08:18.524 Processing file lib/trace/trace_rpc.c 00:08:18.524 Processing file lib/trace/trace_flags.c 00:08:18.524 Processing file lib/trace/trace.c 00:08:18.783 Processing file lib/trace_parser/trace.cpp 00:08:18.783 Processing file lib/ublk/ublk.c 00:08:18.783 Processing file lib/ublk/ublk_rpc.c 00:08:18.783 Processing file lib/ut/ut.c 00:08:18.783 Processing file lib/ut_mock/mock.c 00:08:19.351 Processing file lib/util/base64.c 00:08:19.351 
Processing file lib/util/uuid.c 00:08:19.351 Processing file lib/util/zipf.c 00:08:19.351 Processing file lib/util/bit_array.c 00:08:19.351 Processing file lib/util/crc64.c 00:08:19.351 Processing file lib/util/iov.c 00:08:19.351 Processing file lib/util/xor.c 00:08:19.351 Processing file lib/util/crc16.c 00:08:19.351 Processing file lib/util/cpuset.c 00:08:19.351 Processing file lib/util/file.c 00:08:19.351 Processing file lib/util/crc32.c 00:08:19.351 Processing file lib/util/crc32c.c 00:08:19.351 Processing file lib/util/fd_group.c 00:08:19.351 Processing file lib/util/crc32_ieee.c 00:08:19.351 Processing file lib/util/strerror_tls.c 00:08:19.351 Processing file lib/util/fd.c 00:08:19.351 Processing file lib/util/pipe.c 00:08:19.351 Processing file lib/util/hexlify.c 00:08:19.351 Processing file lib/util/dif.c 00:08:19.351 Processing file lib/util/string.c 00:08:19.351 Processing file lib/util/math.c 00:08:19.351 Processing file lib/vfio_user/host/vfio_user_pci.c 00:08:19.351 Processing file lib/vfio_user/host/vfio_user.c 00:08:19.609 Processing file lib/vhost/vhost_internal.h 00:08:19.609 Processing file lib/vhost/vhost_blk.c 00:08:19.609 Processing file lib/vhost/vhost.c 00:08:19.609 Processing file lib/vhost/vhost_scsi.c 00:08:19.609 Processing file lib/vhost/rte_vhost_user.c 00:08:19.609 Processing file lib/vhost/vhost_rpc.c 00:08:19.609 Processing file lib/virtio/virtio_pci.c 00:08:19.609 Processing file lib/virtio/virtio_vfio_user.c 00:08:19.609 Processing file lib/virtio/virtio.c 00:08:19.609 Processing file lib/virtio/virtio_vhost_user.c 00:08:19.609 Processing file lib/vmd/led.c 00:08:19.609 Processing file lib/vmd/vmd.c 00:08:19.868 Processing file module/accel/dsa/accel_dsa_rpc.c 00:08:19.868 Processing file module/accel/dsa/accel_dsa.c 00:08:19.868 Processing file module/accel/error/accel_error.c 00:08:19.868 Processing file module/accel/error/accel_error_rpc.c 00:08:19.868 Processing file module/accel/iaa/accel_iaa_rpc.c 00:08:19.868 Processing file module/accel/iaa/accel_iaa.c 00:08:19.868 Processing file module/accel/ioat/accel_ioat.c 00:08:19.868 Processing file module/accel/ioat/accel_ioat_rpc.c 00:08:20.126 Processing file module/bdev/aio/bdev_aio.c 00:08:20.126 Processing file module/bdev/aio/bdev_aio_rpc.c 00:08:20.126 Processing file module/bdev/delay/vbdev_delay.c 00:08:20.126 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:08:20.126 Processing file module/bdev/error/vbdev_error.c 00:08:20.126 Processing file module/bdev/error/vbdev_error_rpc.c 00:08:20.126 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:08:20.126 Processing file module/bdev/ftl/bdev_ftl.c 00:08:20.385 Processing file module/bdev/gpt/gpt.c 00:08:20.385 Processing file module/bdev/gpt/vbdev_gpt.c 00:08:20.385 Processing file module/bdev/gpt/gpt.h 00:08:20.385 Processing file module/bdev/iscsi/bdev_iscsi.c 00:08:20.385 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:08:20.385 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:08:20.385 Processing file module/bdev/lvol/vbdev_lvol.c 00:08:20.642 Processing file module/bdev/malloc/bdev_malloc.c 00:08:20.642 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:08:20.642 Processing file module/bdev/null/bdev_null_rpc.c 00:08:20.642 Processing file module/bdev/null/bdev_null.c 00:08:20.901 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:08:20.901 Processing file module/bdev/nvme/bdev_mdns_client.c 00:08:20.901 Processing file module/bdev/nvme/vbdev_opal.c 00:08:20.901 Processing file module/bdev/nvme/bdev_nvme.c 00:08:20.901 
Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:08:20.901 Processing file module/bdev/nvme/nvme_rpc.c 00:08:20.901 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:08:20.901 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:08:20.901 Processing file module/bdev/passthru/vbdev_passthru.c 00:08:21.172 Processing file module/bdev/raid/concat.c 00:08:21.172 Processing file module/bdev/raid/raid1.c 00:08:21.172 Processing file module/bdev/raid/bdev_raid_rpc.c 00:08:21.172 Processing file module/bdev/raid/bdev_raid_sb.c 00:08:21.172 Processing file module/bdev/raid/bdev_raid.h 00:08:21.172 Processing file module/bdev/raid/bdev_raid.c 00:08:21.172 Processing file module/bdev/raid/raid0.c 00:08:21.172 Processing file module/bdev/raid/raid5f.c 00:08:21.172 Processing file module/bdev/split/vbdev_split_rpc.c 00:08:21.172 Processing file module/bdev/split/vbdev_split.c 00:08:21.456 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:08:21.456 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:08:21.456 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:08:21.456 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:08:21.456 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:08:21.456 Processing file module/blob/bdev/blob_bdev.c 00:08:21.456 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:08:21.456 Processing file module/blobfs/bdev/blobfs_bdev.c 00:08:21.456 Processing file module/env_dpdk/env_dpdk_rpc.c 00:08:21.456 Processing file module/event/subsystems/accel/accel.c 00:08:21.715 Processing file module/event/subsystems/bdev/bdev.c 00:08:21.715 Processing file module/event/subsystems/iobuf/iobuf.c 00:08:21.715 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:08:21.715 Processing file module/event/subsystems/iscsi/iscsi.c 00:08:21.715 Processing file module/event/subsystems/nbd/nbd.c 00:08:21.974 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:08:21.974 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:08:21.974 Processing file module/event/subsystems/scheduler/scheduler.c 00:08:21.974 Processing file module/event/subsystems/scsi/scsi.c 00:08:21.974 Processing file module/event/subsystems/sock/sock.c 00:08:21.974 Processing file module/event/subsystems/ublk/ublk.c 00:08:21.974 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:08:22.233 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:08:22.233 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:08:22.233 Processing file module/event/subsystems/vmd/vmd.c 00:08:22.233 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:08:22.233 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:08:22.233 Processing file module/scheduler/gscheduler/gscheduler.c 00:08:22.233 Processing file module/sock/sock_kernel.h 00:08:22.492 Processing file module/sock/posix/posix.c 00:08:22.492 Writing directory view page. 
00:08:22.492 Overall coverage rate: 00:08:22.492 lines......: 38.6% (39266 of 101727 lines) 00:08:22.492 functions..: 42.2% (3587 of 8494 functions) 00:08:22.492 00:08:22.492 00:08:22.492 05:05:41 -- unit/unittest.sh@302 -- # set +x 00:08:22.492 ===================== 00:08:22.492 All unit tests passed 00:08:22.492 ===================== 00:08:22.492 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:08:22.492 00:08:22.492 00:08:22.492 00:08:22.492 real 3m9.220s 00:08:22.492 user 2m44.722s 00:08:22.492 sys 0m15.385s 00:08:22.492 05:05:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.492 ************************************ 00:08:22.492 END TEST unittest 00:08:22.492 ************************************ 00:08:22.492 05:05:41 -- common/autotest_common.sh@10 -- # set +x 00:08:22.492 05:05:41 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:08:22.492 05:05:41 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:08:22.492 05:05:41 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:08:22.492 05:05:41 -- spdk/autotest.sh@173 -- # timing_enter lib 00:08:22.492 05:05:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:22.492 05:05:41 -- common/autotest_common.sh@10 -- # set +x 00:08:22.492 05:05:41 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:22.492 05:05:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:22.492 05:05:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:22.492 05:05:41 -- common/autotest_common.sh@10 -- # set +x 00:08:22.492 ************************************ 00:08:22.492 START TEST env 00:08:22.492 ************************************ 00:08:22.492 05:05:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:22.492 * Looking for test storage... 
00:08:22.492 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:08:22.492 05:05:41 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:22.492 05:05:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:22.493 05:05:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:22.493 05:05:41 -- common/autotest_common.sh@10 -- # set +x 00:08:22.493 ************************************ 00:08:22.493 START TEST env_memory 00:08:22.493 ************************************ 00:08:22.493 05:05:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:22.493 00:08:22.493 00:08:22.493 CUnit - A unit testing framework for C - Version 2.1-3 00:08:22.493 http://cunit.sourceforge.net/ 00:08:22.493 00:08:22.493 00:08:22.493 Suite: memory 00:08:22.752 Test: alloc and free memory map ...[2024-07-26 05:05:41.622943] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:22.752 passed 00:08:22.752 Test: mem map translation ...[2024-07-26 05:05:41.685762] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:22.752 [2024-07-26 05:05:41.685859] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:22.752 [2024-07-26 05:05:41.685990] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:22.752 [2024-07-26 05:05:41.686061] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:22.752 passed 00:08:22.752 Test: mem map registration ...[2024-07-26 05:05:41.787472] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:08:22.752 [2024-07-26 05:05:41.787586] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:08:22.752 passed 00:08:23.011 Test: mem map adjacent registrations ...passed 00:08:23.011 00:08:23.011 Run Summary: Type Total Ran Passed Failed Inactive 00:08:23.011 suites 1 1 n/a 0 0 00:08:23.011 tests 4 4 4 0 0 00:08:23.011 asserts 152 152 152 0 n/a 00:08:23.011 00:08:23.011 Elapsed time = 0.318 seconds 00:08:23.011 00:08:23.011 real 0m0.347s 00:08:23.011 user 0m0.321s 00:08:23.011 sys 0m0.027s 00:08:23.011 05:05:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.011 05:05:41 -- common/autotest_common.sh@10 -- # set +x 00:08:23.011 ************************************ 00:08:23.011 END TEST env_memory 00:08:23.011 ************************************ 00:08:23.011 05:05:41 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:23.011 05:05:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:23.011 05:05:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:23.011 05:05:41 -- common/autotest_common.sh@10 -- # set +x 00:08:23.011 ************************************ 00:08:23.011 START TEST env_vtophys 00:08:23.011 ************************************ 00:08:23.011 05:05:41 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:23.011 EAL: lib.eal log level changed from notice to debug 00:08:23.011 EAL: Detected lcore 0 as core 0 on socket 0 00:08:23.011 EAL: Detected lcore 1 as core 0 on socket 0 00:08:23.011 EAL: Detected lcore 2 as core 0 on socket 0 00:08:23.011 EAL: Detected lcore 3 as core 0 on socket 0 00:08:23.011 EAL: Detected lcore 4 as core 0 on socket 0 00:08:23.011 EAL: Detected lcore 5 as core 0 on socket 0 00:08:23.011 EAL: Detected lcore 6 as core 0 on socket 0 00:08:23.011 EAL: Detected lcore 7 as core 0 on socket 0 00:08:23.011 EAL: Detected lcore 8 as core 0 on socket 0 00:08:23.011 EAL: Detected lcore 9 as core 0 on socket 0 00:08:23.011 EAL: Maximum logical cores by configuration: 128 00:08:23.011 EAL: Detected CPU lcores: 10 00:08:23.011 EAL: Detected NUMA nodes: 1 00:08:23.011 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:08:23.011 EAL: Checking presence of .so 'librte_eal.so.24' 00:08:23.011 EAL: Checking presence of .so 'librte_eal.so' 00:08:23.011 EAL: Detected static linkage of DPDK 00:08:23.011 EAL: No shared files mode enabled, IPC will be disabled 00:08:23.011 EAL: Selected IOVA mode 'PA' 00:08:23.011 EAL: Probing VFIO support... 00:08:23.011 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:23.011 EAL: VFIO modules not loaded, skipping VFIO support... 00:08:23.011 EAL: Ask a virtual area of 0x2e000 bytes 00:08:23.011 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:23.011 EAL: Setting up physically contiguous memory... 00:08:23.011 EAL: Setting maximum number of open files to 1048576 00:08:23.011 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:23.011 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:23.011 EAL: Ask a virtual area of 0x61000 bytes 00:08:23.011 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:23.011 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:23.011 EAL: Ask a virtual area of 0x400000000 bytes 00:08:23.011 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:23.011 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:23.011 EAL: Ask a virtual area of 0x61000 bytes 00:08:23.011 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:23.011 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:23.011 EAL: Ask a virtual area of 0x400000000 bytes 00:08:23.011 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:23.011 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:23.011 EAL: Ask a virtual area of 0x61000 bytes 00:08:23.011 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:23.011 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:23.011 EAL: Ask a virtual area of 0x400000000 bytes 00:08:23.011 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:23.011 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:23.011 EAL: Ask a virtual area of 0x61000 bytes 00:08:23.011 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:23.011 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:23.011 EAL: Ask a virtual area of 0x400000000 bytes 00:08:23.011 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:23.011 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:23.011 EAL: Hugepages will be freed exactly as allocated. 
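The EAL banner above comes from launching the vtophys helper directly; assuming hugepages have already been reserved (for example with SPDK's scripts/setup.sh, which is not shown in this log), the same run can be reproduced by hand:

  sudo /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys

The "Heap on socket 0 was expanded by ... / shrunk by ..." pairs that follow are the malloc suite allocating progressively larger buffers, with the DPDK heap growing for each allocation and being released again on free, in 2 MB hugepage multiples.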
00:08:23.011 EAL: No shared files mode enabled, IPC is disabled 00:08:23.011 EAL: No shared files mode enabled, IPC is disabled 00:08:23.271 EAL: TSC frequency is ~2200000 KHz 00:08:23.271 EAL: Main lcore 0 is ready (tid=7cddcf0d1a80;cpuset=[0]) 00:08:23.271 EAL: Trying to obtain current memory policy. 00:08:23.271 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:23.271 EAL: Restoring previous memory policy: 0 00:08:23.271 EAL: request: mp_malloc_sync 00:08:23.271 EAL: No shared files mode enabled, IPC is disabled 00:08:23.271 EAL: Heap on socket 0 was expanded by 2MB 00:08:23.271 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:23.271 EAL: Mem event callback 'spdk:(nil)' registered 00:08:23.271 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:08:23.271 00:08:23.271 00:08:23.271 CUnit - A unit testing framework for C - Version 2.1-3 00:08:23.271 http://cunit.sourceforge.net/ 00:08:23.271 00:08:23.271 00:08:23.271 Suite: components_suite 00:08:23.271 Test: vtophys_malloc_test ...passed 00:08:23.271 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:23.271 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:23.271 EAL: Restoring previous memory policy: 4 00:08:23.271 EAL: Calling mem event callback 'spdk:(nil)' 00:08:23.271 EAL: request: mp_malloc_sync 00:08:23.271 EAL: No shared files mode enabled, IPC is disabled 00:08:23.271 EAL: Heap on socket 0 was expanded by 4MB 00:08:23.271 EAL: Calling mem event callback 'spdk:(nil)' 00:08:23.271 EAL: request: mp_malloc_sync 00:08:23.271 EAL: No shared files mode enabled, IPC is disabled 00:08:23.271 EAL: Heap on socket 0 was shrunk by 4MB 00:08:23.271 EAL: Trying to obtain current memory policy. 00:08:23.271 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:23.271 EAL: Restoring previous memory policy: 4 00:08:23.271 EAL: Calling mem event callback 'spdk:(nil)' 00:08:23.271 EAL: request: mp_malloc_sync 00:08:23.271 EAL: No shared files mode enabled, IPC is disabled 00:08:23.271 EAL: Heap on socket 0 was expanded by 6MB 00:08:23.271 EAL: Calling mem event callback 'spdk:(nil)' 00:08:23.271 EAL: request: mp_malloc_sync 00:08:23.271 EAL: No shared files mode enabled, IPC is disabled 00:08:23.271 EAL: Heap on socket 0 was shrunk by 6MB 00:08:23.271 EAL: Trying to obtain current memory policy. 00:08:23.271 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:23.271 EAL: Restoring previous memory policy: 4 00:08:23.271 EAL: Calling mem event callback 'spdk:(nil)' 00:08:23.271 EAL: request: mp_malloc_sync 00:08:23.271 EAL: No shared files mode enabled, IPC is disabled 00:08:23.271 EAL: Heap on socket 0 was expanded by 10MB 00:08:23.271 EAL: Calling mem event callback 'spdk:(nil)' 00:08:23.271 EAL: request: mp_malloc_sync 00:08:23.271 EAL: No shared files mode enabled, IPC is disabled 00:08:23.271 EAL: Heap on socket 0 was shrunk by 10MB 00:08:23.271 EAL: Trying to obtain current memory policy. 
00:08:23.271 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:23.271 EAL: Restoring previous memory policy: 4 00:08:23.271 EAL: Calling mem event callback 'spdk:(nil)' 00:08:23.271 EAL: request: mp_malloc_sync 00:08:23.271 EAL: No shared files mode enabled, IPC is disabled 00:08:23.271 EAL: Heap on socket 0 was expanded by 18MB 00:08:23.271 EAL: Calling mem event callback 'spdk:(nil)' 00:08:23.271 EAL: request: mp_malloc_sync 00:08:23.271 EAL: No shared files mode enabled, IPC is disabled 00:08:23.271 EAL: Heap on socket 0 was shrunk by 18MB 00:08:23.271 EAL: Trying to obtain current memory policy. 00:08:23.271 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:23.271 EAL: Restoring previous memory policy: 4 00:08:23.271 EAL: Calling mem event callback 'spdk:(nil)' 00:08:23.271 EAL: request: mp_malloc_sync 00:08:23.271 EAL: No shared files mode enabled, IPC is disabled 00:08:23.271 EAL: Heap on socket 0 was expanded by 34MB 00:08:23.530 EAL: Calling mem event callback 'spdk:(nil)' 00:08:23.530 EAL: request: mp_malloc_sync 00:08:23.530 EAL: No shared files mode enabled, IPC is disabled 00:08:23.530 EAL: Heap on socket 0 was shrunk by 34MB 00:08:23.530 EAL: Trying to obtain current memory policy. 00:08:23.530 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:23.530 EAL: Restoring previous memory policy: 4 00:08:23.530 EAL: Calling mem event callback 'spdk:(nil)' 00:08:23.530 EAL: request: mp_malloc_sync 00:08:23.530 EAL: No shared files mode enabled, IPC is disabled 00:08:23.530 EAL: Heap on socket 0 was expanded by 66MB 00:08:23.530 EAL: Calling mem event callback 'spdk:(nil)' 00:08:23.530 EAL: request: mp_malloc_sync 00:08:23.530 EAL: No shared files mode enabled, IPC is disabled 00:08:23.530 EAL: Heap on socket 0 was shrunk by 66MB 00:08:23.788 EAL: Trying to obtain current memory policy. 00:08:23.788 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:23.788 EAL: Restoring previous memory policy: 4 00:08:23.788 EAL: Calling mem event callback 'spdk:(nil)' 00:08:23.788 EAL: request: mp_malloc_sync 00:08:23.788 EAL: No shared files mode enabled, IPC is disabled 00:08:23.788 EAL: Heap on socket 0 was expanded by 130MB 00:08:23.788 EAL: Calling mem event callback 'spdk:(nil)' 00:08:23.788 EAL: request: mp_malloc_sync 00:08:23.788 EAL: No shared files mode enabled, IPC is disabled 00:08:23.788 EAL: Heap on socket 0 was shrunk by 130MB 00:08:24.047 EAL: Trying to obtain current memory policy. 00:08:24.047 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:24.047 EAL: Restoring previous memory policy: 4 00:08:24.047 EAL: Calling mem event callback 'spdk:(nil)' 00:08:24.047 EAL: request: mp_malloc_sync 00:08:24.047 EAL: No shared files mode enabled, IPC is disabled 00:08:24.047 EAL: Heap on socket 0 was expanded by 258MB 00:08:24.305 EAL: Calling mem event callback 'spdk:(nil)' 00:08:24.562 EAL: request: mp_malloc_sync 00:08:24.562 EAL: No shared files mode enabled, IPC is disabled 00:08:24.562 EAL: Heap on socket 0 was shrunk by 258MB 00:08:24.819 EAL: Trying to obtain current memory policy. 
00:08:24.819 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:24.819 EAL: Restoring previous memory policy: 4 00:08:24.819 EAL: Calling mem event callback 'spdk:(nil)' 00:08:24.819 EAL: request: mp_malloc_sync 00:08:24.819 EAL: No shared files mode enabled, IPC is disabled 00:08:24.819 EAL: Heap on socket 0 was expanded by 514MB 00:08:25.755 EAL: Calling mem event callback 'spdk:(nil)' 00:08:25.755 EAL: request: mp_malloc_sync 00:08:25.755 EAL: No shared files mode enabled, IPC is disabled 00:08:25.755 EAL: Heap on socket 0 was shrunk by 514MB 00:08:26.322 EAL: Trying to obtain current memory policy. 00:08:26.322 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:26.322 EAL: Restoring previous memory policy: 4 00:08:26.322 EAL: Calling mem event callback 'spdk:(nil)' 00:08:26.322 EAL: request: mp_malloc_sync 00:08:26.322 EAL: No shared files mode enabled, IPC is disabled 00:08:26.322 EAL: Heap on socket 0 was expanded by 1026MB 00:08:27.698 EAL: Calling mem event callback 'spdk:(nil)' 00:08:27.956 EAL: request: mp_malloc_sync 00:08:27.956 EAL: No shared files mode enabled, IPC is disabled 00:08:27.956 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:29.333 passed 00:08:29.333 00:08:29.333 Run Summary: Type Total Ran Passed Failed Inactive 00:08:29.333 suites 1 1 n/a 0 0 00:08:29.333 tests 2 2 2 0 0 00:08:29.333 asserts 5460 5460 5460 0 n/a 00:08:29.333 00:08:29.333 Elapsed time = 5.911 seconds 00:08:29.333 EAL: Calling mem event callback 'spdk:(nil)' 00:08:29.333 EAL: request: mp_malloc_sync 00:08:29.333 EAL: No shared files mode enabled, IPC is disabled 00:08:29.333 EAL: Heap on socket 0 was shrunk by 2MB 00:08:29.333 EAL: No shared files mode enabled, IPC is disabled 00:08:29.333 EAL: No shared files mode enabled, IPC is disabled 00:08:29.333 EAL: No shared files mode enabled, IPC is disabled 00:08:29.333 00:08:29.333 real 0m6.184s 00:08:29.333 user 0m5.392s 00:08:29.333 sys 0m0.666s 00:08:29.333 05:05:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.333 05:05:48 -- common/autotest_common.sh@10 -- # set +x 00:08:29.333 ************************************ 00:08:29.333 END TEST env_vtophys 00:08:29.333 ************************************ 00:08:29.333 05:05:48 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:29.333 05:05:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:29.333 05:05:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:29.333 05:05:48 -- common/autotest_common.sh@10 -- # set +x 00:08:29.333 ************************************ 00:08:29.333 START TEST env_pci 00:08:29.333 ************************************ 00:08:29.333 05:05:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:29.333 00:08:29.333 00:08:29.333 CUnit - A unit testing framework for C - Version 2.1-3 00:08:29.333 http://cunit.sourceforge.net/ 00:08:29.333 00:08:29.333 00:08:29.333 Suite: pci 00:08:29.333 Test: pci_hook ...[2024-07-26 05:05:48.227481] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 60447 has claimed it 00:08:29.333 passed 00:08:29.333 00:08:29.333 EAL: Cannot find device (10000:00:01.0) 00:08:29.333 EAL: Failed to attach device on primary process 00:08:29.333 Run Summary: Type Total Ran Passed Failed Inactive 00:08:29.333 suites 1 1 n/a 0 0 00:08:29.333 tests 1 1 1 0 0 00:08:29.333 asserts 25 25 25 0 n/a 00:08:29.333 00:08:29.333 Elapsed 
time = 0.007 seconds 00:08:29.333 00:08:29.333 real 0m0.082s 00:08:29.333 user 0m0.054s 00:08:29.333 sys 0m0.029s 00:08:29.333 05:05:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.333 05:05:48 -- common/autotest_common.sh@10 -- # set +x 00:08:29.333 ************************************ 00:08:29.333 END TEST env_pci 00:08:29.333 ************************************ 00:08:29.333 05:05:48 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:29.333 05:05:48 -- env/env.sh@15 -- # uname 00:08:29.333 05:05:48 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:29.333 05:05:48 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:29.333 05:05:48 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:29.333 05:05:48 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:08:29.333 05:05:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:29.333 05:05:48 -- common/autotest_common.sh@10 -- # set +x 00:08:29.333 ************************************ 00:08:29.333 START TEST env_dpdk_post_init 00:08:29.333 ************************************ 00:08:29.333 05:05:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:29.333 EAL: Detected CPU lcores: 10 00:08:29.333 EAL: Detected NUMA nodes: 1 00:08:29.333 EAL: Detected static linkage of DPDK 00:08:29.333 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:29.333 EAL: Selected IOVA mode 'PA' 00:08:29.592 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:29.592 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:08:29.592 Starting DPDK initialization... 00:08:29.592 Starting SPDK post initialization... 00:08:29.592 SPDK NVMe probe 00:08:29.592 Attaching to 0000:00:06.0 00:08:29.592 Attached to 0000:00:06.0 00:08:29.592 Cleaning up... 
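The probe output above ("Attaching to 0000:00:06.0 ... Cleaning up...") can be replayed outside the harness with the exact arguments the runner used; -c 0x1 pins the app to core 0, and --base-virtaddr places the DPDK mappings at a fixed address (SPDK's test scripts use this largely to keep them clear of the ASAN shadow region):

  sudo /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000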
00:08:29.592 00:08:29.592 real 0m0.254s 00:08:29.592 user 0m0.075s 00:08:29.592 sys 0m0.080s 00:08:29.592 05:05:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.592 05:05:48 -- common/autotest_common.sh@10 -- # set +x 00:08:29.592 ************************************ 00:08:29.592 END TEST env_dpdk_post_init 00:08:29.592 ************************************ 00:08:29.592 05:05:48 -- env/env.sh@26 -- # uname 00:08:29.592 05:05:48 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:29.592 05:05:48 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:29.592 05:05:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:29.592 05:05:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:29.592 05:05:48 -- common/autotest_common.sh@10 -- # set +x 00:08:29.592 ************************************ 00:08:29.592 START TEST env_mem_callbacks 00:08:29.592 ************************************ 00:08:29.592 05:05:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:29.592 EAL: Detected CPU lcores: 10 00:08:29.592 EAL: Detected NUMA nodes: 1 00:08:29.592 EAL: Detected static linkage of DPDK 00:08:29.851 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:29.851 EAL: Selected IOVA mode 'PA' 00:08:29.851 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:29.851 00:08:29.851 00:08:29.851 CUnit - A unit testing framework for C - Version 2.1-3 00:08:29.851 http://cunit.sourceforge.net/ 00:08:29.851 00:08:29.851 00:08:29.851 Suite: memory 00:08:29.851 Test: test ... 00:08:29.851 register 0x200000200000 2097152 00:08:29.851 malloc 3145728 00:08:29.851 register 0x200000400000 4194304 00:08:29.851 buf 0x2000004fffc0 len 3145728 PASSED 00:08:29.851 malloc 64 00:08:29.851 buf 0x2000004ffec0 len 64 PASSED 00:08:29.851 malloc 4194304 00:08:29.851 register 0x200000800000 6291456 00:08:29.851 buf 0x2000009fffc0 len 4194304 PASSED 00:08:29.851 free 0x2000004fffc0 3145728 00:08:29.851 free 0x2000004ffec0 64 00:08:29.851 unregister 0x200000400000 4194304 PASSED 00:08:29.851 free 0x2000009fffc0 4194304 00:08:29.851 unregister 0x200000800000 6291456 PASSED 00:08:29.851 malloc 8388608 00:08:29.851 register 0x200000400000 10485760 00:08:29.851 buf 0x2000005fffc0 len 8388608 PASSED 00:08:29.851 free 0x2000005fffc0 8388608 00:08:29.851 unregister 0x200000400000 10485760 PASSED 00:08:29.851 passed 00:08:29.851 00:08:29.851 Run Summary: Type Total Ran Passed Failed Inactive 00:08:29.851 suites 1 1 n/a 0 0 00:08:29.851 tests 1 1 1 0 0 00:08:29.851 asserts 15 15 15 0 n/a 00:08:29.851 00:08:29.851 Elapsed time = 0.058 seconds 00:08:29.851 00:08:29.851 real 0m0.258s 00:08:29.851 user 0m0.090s 00:08:29.851 sys 0m0.068s 00:08:29.851 05:05:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.851 05:05:48 -- common/autotest_common.sh@10 -- # set +x 00:08:29.851 ************************************ 00:08:29.851 END TEST env_mem_callbacks 00:08:29.851 ************************************ 00:08:29.851 00:08:29.851 real 0m7.471s 00:08:29.851 user 0m6.040s 00:08:29.851 sys 0m1.097s 00:08:29.851 05:05:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.851 05:05:48 -- common/autotest_common.sh@10 -- # set +x 00:08:29.851 ************************************ 00:08:29.851 END TEST env 00:08:29.851 ************************************ 00:08:30.111 05:05:48 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
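rpc.sh exercises the target through the rpc_cmd wrapper; a rough hand-driven equivalent with scripts/rpc.py, using the same bdev names and sizes that show up in the JSON dumps below, would look like this (assuming hugepages are available for the target):

  sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  ./scripts/rpc.py bdev_malloc_create 8 512              # 8 MB at 512-byte blocks -> Malloc0 (16384 blocks)
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs                        # lists Malloc0 and Passthru0, as in the output below
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0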
00:08:30.111 05:05:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:30.111 05:05:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:30.111 05:05:48 -- common/autotest_common.sh@10 -- # set +x 00:08:30.111 ************************************ 00:08:30.111 START TEST rpc 00:08:30.111 ************************************ 00:08:30.111 05:05:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:30.111 * Looking for test storage... 00:08:30.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:30.111 05:05:49 -- rpc/rpc.sh@65 -- # spdk_pid=60565 00:08:30.111 05:05:49 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:30.111 05:05:49 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:08:30.111 05:05:49 -- rpc/rpc.sh@67 -- # waitforlisten 60565 00:08:30.111 05:05:49 -- common/autotest_common.sh@819 -- # '[' -z 60565 ']' 00:08:30.111 05:05:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.111 05:05:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:30.111 05:05:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.111 05:05:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:30.111 05:05:49 -- common/autotest_common.sh@10 -- # set +x 00:08:30.111 [2024-07-26 05:05:49.161173] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:30.111 [2024-07-26 05:05:49.161363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60565 ] 00:08:30.372 [2024-07-26 05:05:49.328688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.631 [2024-07-26 05:05:49.489728] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:30.631 [2024-07-26 05:05:49.490022] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:30.631 [2024-07-26 05:05:49.490048] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60565' to capture a snapshot of events at runtime. 00:08:30.631 [2024-07-26 05:05:49.490061] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60565 for offline analysis/debug. 
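Because the target was started with -e bdev, the trace notices above point at a shared-memory trace file (/dev/shm/spdk_tgt_trace.pid60565); while the target is still running, a snapshot can be taken exactly as the log suggests, and the trace_get_info RPC further down reports the same configuration ("tpoint_group_mask": "0x8", i.e. the bdev group):

  spdk_trace -s spdk_tgt -p 60565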
00:08:30.631 [2024-07-26 05:05:49.490101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.007 05:05:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:32.007 05:05:50 -- common/autotest_common.sh@852 -- # return 0 00:08:32.008 05:05:50 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:32.008 05:05:50 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:32.008 05:05:50 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:32.008 05:05:50 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:32.008 05:05:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:32.008 05:05:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:32.008 05:05:50 -- common/autotest_common.sh@10 -- # set +x 00:08:32.008 ************************************ 00:08:32.008 START TEST rpc_integrity 00:08:32.008 ************************************ 00:08:32.008 05:05:50 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:08:32.008 05:05:50 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:32.008 05:05:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.008 05:05:50 -- common/autotest_common.sh@10 -- # set +x 00:08:32.008 05:05:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.008 05:05:50 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:32.008 05:05:50 -- rpc/rpc.sh@13 -- # jq length 00:08:32.008 05:05:50 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:32.008 05:05:50 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:32.008 05:05:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.008 05:05:50 -- common/autotest_common.sh@10 -- # set +x 00:08:32.008 05:05:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.008 05:05:50 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:32.008 05:05:50 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:32.008 05:05:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.008 05:05:50 -- common/autotest_common.sh@10 -- # set +x 00:08:32.008 05:05:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.008 05:05:50 -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:32.008 { 00:08:32.008 "name": "Malloc0", 00:08:32.008 "aliases": [ 00:08:32.008 "0ff3270d-a64b-4696-87c2-1d9a77063ac9" 00:08:32.008 ], 00:08:32.008 "product_name": "Malloc disk", 00:08:32.008 "block_size": 512, 00:08:32.008 "num_blocks": 16384, 00:08:32.008 "uuid": "0ff3270d-a64b-4696-87c2-1d9a77063ac9", 00:08:32.008 "assigned_rate_limits": { 00:08:32.008 "rw_ios_per_sec": 0, 00:08:32.008 "rw_mbytes_per_sec": 0, 00:08:32.008 "r_mbytes_per_sec": 0, 00:08:32.008 "w_mbytes_per_sec": 0 00:08:32.008 }, 00:08:32.008 "claimed": false, 00:08:32.008 "zoned": false, 00:08:32.008 "supported_io_types": { 00:08:32.008 "read": true, 00:08:32.008 "write": true, 00:08:32.008 "unmap": true, 00:08:32.008 "write_zeroes": true, 00:08:32.008 "flush": true, 00:08:32.008 "reset": true, 00:08:32.008 "compare": false, 00:08:32.008 "compare_and_write": false, 00:08:32.008 "abort": true, 00:08:32.008 "nvme_admin": false, 00:08:32.008 "nvme_io": false 00:08:32.008 }, 00:08:32.008 "memory_domains": [ 00:08:32.008 { 00:08:32.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.008 
"dma_device_type": 2 00:08:32.008 } 00:08:32.008 ], 00:08:32.008 "driver_specific": {} 00:08:32.008 } 00:08:32.008 ]' 00:08:32.008 05:05:50 -- rpc/rpc.sh@17 -- # jq length 00:08:32.008 05:05:50 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:32.008 05:05:50 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:32.008 05:05:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.008 05:05:50 -- common/autotest_common.sh@10 -- # set +x 00:08:32.008 [2024-07-26 05:05:50.933943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:32.008 [2024-07-26 05:05:50.934063] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.008 [2024-07-26 05:05:50.934097] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006f80 00:08:32.008 [2024-07-26 05:05:50.934122] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.008 [2024-07-26 05:05:50.936577] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.008 [2024-07-26 05:05:50.936637] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:32.008 Passthru0 00:08:32.008 05:05:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.008 05:05:50 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:32.008 05:05:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.008 05:05:50 -- common/autotest_common.sh@10 -- # set +x 00:08:32.008 05:05:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.008 05:05:50 -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:32.008 { 00:08:32.008 "name": "Malloc0", 00:08:32.008 "aliases": [ 00:08:32.008 "0ff3270d-a64b-4696-87c2-1d9a77063ac9" 00:08:32.008 ], 00:08:32.008 "product_name": "Malloc disk", 00:08:32.008 "block_size": 512, 00:08:32.008 "num_blocks": 16384, 00:08:32.008 "uuid": "0ff3270d-a64b-4696-87c2-1d9a77063ac9", 00:08:32.008 "assigned_rate_limits": { 00:08:32.008 "rw_ios_per_sec": 0, 00:08:32.008 "rw_mbytes_per_sec": 0, 00:08:32.008 "r_mbytes_per_sec": 0, 00:08:32.008 "w_mbytes_per_sec": 0 00:08:32.008 }, 00:08:32.008 "claimed": true, 00:08:32.008 "claim_type": "exclusive_write", 00:08:32.008 "zoned": false, 00:08:32.008 "supported_io_types": { 00:08:32.008 "read": true, 00:08:32.008 "write": true, 00:08:32.008 "unmap": true, 00:08:32.008 "write_zeroes": true, 00:08:32.008 "flush": true, 00:08:32.008 "reset": true, 00:08:32.008 "compare": false, 00:08:32.008 "compare_and_write": false, 00:08:32.008 "abort": true, 00:08:32.008 "nvme_admin": false, 00:08:32.008 "nvme_io": false 00:08:32.008 }, 00:08:32.008 "memory_domains": [ 00:08:32.008 { 00:08:32.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.008 "dma_device_type": 2 00:08:32.008 } 00:08:32.008 ], 00:08:32.008 "driver_specific": {} 00:08:32.008 }, 00:08:32.008 { 00:08:32.008 "name": "Passthru0", 00:08:32.008 "aliases": [ 00:08:32.008 "59d60d99-feb8-5522-8d32-63dfe7f7570c" 00:08:32.008 ], 00:08:32.008 "product_name": "passthru", 00:08:32.008 "block_size": 512, 00:08:32.008 "num_blocks": 16384, 00:08:32.008 "uuid": "59d60d99-feb8-5522-8d32-63dfe7f7570c", 00:08:32.008 "assigned_rate_limits": { 00:08:32.008 "rw_ios_per_sec": 0, 00:08:32.008 "rw_mbytes_per_sec": 0, 00:08:32.008 "r_mbytes_per_sec": 0, 00:08:32.008 "w_mbytes_per_sec": 0 00:08:32.008 }, 00:08:32.008 "claimed": false, 00:08:32.008 "zoned": false, 00:08:32.008 "supported_io_types": { 00:08:32.008 "read": true, 00:08:32.008 "write": true, 00:08:32.008 "unmap": true, 00:08:32.008 
"write_zeroes": true, 00:08:32.008 "flush": true, 00:08:32.008 "reset": true, 00:08:32.008 "compare": false, 00:08:32.008 "compare_and_write": false, 00:08:32.008 "abort": true, 00:08:32.008 "nvme_admin": false, 00:08:32.008 "nvme_io": false 00:08:32.008 }, 00:08:32.008 "memory_domains": [ 00:08:32.008 { 00:08:32.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.008 "dma_device_type": 2 00:08:32.008 } 00:08:32.008 ], 00:08:32.008 "driver_specific": { 00:08:32.008 "passthru": { 00:08:32.008 "name": "Passthru0", 00:08:32.008 "base_bdev_name": "Malloc0" 00:08:32.008 } 00:08:32.008 } 00:08:32.008 } 00:08:32.008 ]' 00:08:32.008 05:05:50 -- rpc/rpc.sh@21 -- # jq length 00:08:32.008 05:05:50 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:32.008 05:05:50 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:32.008 05:05:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.008 05:05:50 -- common/autotest_common.sh@10 -- # set +x 00:08:32.008 05:05:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.008 05:05:50 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:32.008 05:05:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.008 05:05:50 -- common/autotest_common.sh@10 -- # set +x 00:08:32.008 05:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.008 05:05:51 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:32.008 05:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.008 05:05:51 -- common/autotest_common.sh@10 -- # set +x 00:08:32.008 05:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.008 05:05:51 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:32.008 05:05:51 -- rpc/rpc.sh@26 -- # jq length 00:08:32.008 05:05:51 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:32.008 00:08:32.008 real 0m0.169s 00:08:32.008 user 0m0.051s 00:08:32.008 sys 0m0.033s 00:08:32.008 05:05:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.008 ************************************ 00:08:32.008 END TEST rpc_integrity 00:08:32.008 05:05:51 -- common/autotest_common.sh@10 -- # set +x 00:08:32.008 ************************************ 00:08:32.008 05:05:51 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:32.008 05:05:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:32.008 05:05:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:32.008 05:05:51 -- common/autotest_common.sh@10 -- # set +x 00:08:32.008 ************************************ 00:08:32.008 START TEST rpc_plugins 00:08:32.008 ************************************ 00:08:32.008 05:05:51 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:08:32.008 05:05:51 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:32.008 05:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.008 05:05:51 -- common/autotest_common.sh@10 -- # set +x 00:08:32.008 05:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.008 05:05:51 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:32.008 05:05:51 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:32.008 05:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.009 05:05:51 -- common/autotest_common.sh@10 -- # set +x 00:08:32.268 05:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.268 05:05:51 -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:32.268 { 00:08:32.268 "name": "Malloc1", 00:08:32.268 "aliases": [ 00:08:32.268 "cba9b254-7e4a-4e91-9ec0-7eec7b91b57b" 00:08:32.268 ], 00:08:32.268 "product_name": "Malloc disk", 00:08:32.268 
"block_size": 4096, 00:08:32.268 "num_blocks": 256, 00:08:32.268 "uuid": "cba9b254-7e4a-4e91-9ec0-7eec7b91b57b", 00:08:32.268 "assigned_rate_limits": { 00:08:32.268 "rw_ios_per_sec": 0, 00:08:32.268 "rw_mbytes_per_sec": 0, 00:08:32.268 "r_mbytes_per_sec": 0, 00:08:32.268 "w_mbytes_per_sec": 0 00:08:32.268 }, 00:08:32.268 "claimed": false, 00:08:32.268 "zoned": false, 00:08:32.268 "supported_io_types": { 00:08:32.268 "read": true, 00:08:32.268 "write": true, 00:08:32.268 "unmap": true, 00:08:32.268 "write_zeroes": true, 00:08:32.268 "flush": true, 00:08:32.268 "reset": true, 00:08:32.268 "compare": false, 00:08:32.268 "compare_and_write": false, 00:08:32.268 "abort": true, 00:08:32.268 "nvme_admin": false, 00:08:32.268 "nvme_io": false 00:08:32.268 }, 00:08:32.268 "memory_domains": [ 00:08:32.268 { 00:08:32.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.268 "dma_device_type": 2 00:08:32.268 } 00:08:32.268 ], 00:08:32.268 "driver_specific": {} 00:08:32.268 } 00:08:32.268 ]' 00:08:32.268 05:05:51 -- rpc/rpc.sh@32 -- # jq length 00:08:32.268 05:05:51 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:32.268 05:05:51 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:32.268 05:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.268 05:05:51 -- common/autotest_common.sh@10 -- # set +x 00:08:32.268 05:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.268 05:05:51 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:32.268 05:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.268 05:05:51 -- common/autotest_common.sh@10 -- # set +x 00:08:32.268 05:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.268 05:05:51 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:32.268 05:05:51 -- rpc/rpc.sh@36 -- # jq length 00:08:32.268 05:05:51 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:32.268 00:08:32.268 real 0m0.077s 00:08:32.268 user 0m0.019s 00:08:32.268 sys 0m0.021s 00:08:32.268 05:05:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.268 ************************************ 00:08:32.268 END TEST rpc_plugins 00:08:32.268 05:05:51 -- common/autotest_common.sh@10 -- # set +x 00:08:32.268 ************************************ 00:08:32.268 05:05:51 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:32.268 05:05:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:32.268 05:05:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:32.268 05:05:51 -- common/autotest_common.sh@10 -- # set +x 00:08:32.268 ************************************ 00:08:32.268 START TEST rpc_trace_cmd_test 00:08:32.268 ************************************ 00:08:32.268 05:05:51 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:08:32.268 05:05:51 -- rpc/rpc.sh@40 -- # local info 00:08:32.268 05:05:51 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:32.268 05:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.268 05:05:51 -- common/autotest_common.sh@10 -- # set +x 00:08:32.268 05:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.268 05:05:51 -- rpc/rpc.sh@42 -- # info='{ 00:08:32.268 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60565", 00:08:32.268 "tpoint_group_mask": "0x8", 00:08:32.268 "iscsi_conn": { 00:08:32.268 "mask": "0x2", 00:08:32.268 "tpoint_mask": "0x0" 00:08:32.268 }, 00:08:32.268 "scsi": { 00:08:32.268 "mask": "0x4", 00:08:32.268 "tpoint_mask": "0x0" 00:08:32.268 }, 00:08:32.268 "bdev": { 00:08:32.268 "mask": "0x8", 00:08:32.268 "tpoint_mask": 
"0xffffffffffffffff" 00:08:32.268 }, 00:08:32.268 "nvmf_rdma": { 00:08:32.268 "mask": "0x10", 00:08:32.268 "tpoint_mask": "0x0" 00:08:32.268 }, 00:08:32.268 "nvmf_tcp": { 00:08:32.268 "mask": "0x20", 00:08:32.268 "tpoint_mask": "0x0" 00:08:32.268 }, 00:08:32.268 "ftl": { 00:08:32.268 "mask": "0x40", 00:08:32.268 "tpoint_mask": "0x0" 00:08:32.268 }, 00:08:32.268 "blobfs": { 00:08:32.268 "mask": "0x80", 00:08:32.268 "tpoint_mask": "0x0" 00:08:32.268 }, 00:08:32.268 "dsa": { 00:08:32.268 "mask": "0x200", 00:08:32.268 "tpoint_mask": "0x0" 00:08:32.268 }, 00:08:32.268 "thread": { 00:08:32.268 "mask": "0x400", 00:08:32.268 "tpoint_mask": "0x0" 00:08:32.268 }, 00:08:32.268 "nvme_pcie": { 00:08:32.268 "mask": "0x800", 00:08:32.268 "tpoint_mask": "0x0" 00:08:32.268 }, 00:08:32.268 "iaa": { 00:08:32.268 "mask": "0x1000", 00:08:32.268 "tpoint_mask": "0x0" 00:08:32.268 }, 00:08:32.268 "nvme_tcp": { 00:08:32.268 "mask": "0x2000", 00:08:32.268 "tpoint_mask": "0x0" 00:08:32.268 }, 00:08:32.268 "bdev_nvme": { 00:08:32.268 "mask": "0x4000", 00:08:32.268 "tpoint_mask": "0x0" 00:08:32.268 } 00:08:32.268 }' 00:08:32.268 05:05:51 -- rpc/rpc.sh@43 -- # jq length 00:08:32.268 05:05:51 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:08:32.268 05:05:51 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:32.268 05:05:51 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:32.268 05:05:51 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:32.268 05:05:51 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:32.268 05:05:51 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:32.268 05:05:51 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:32.268 05:05:51 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:32.268 05:05:51 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:32.268 00:08:32.268 real 0m0.070s 00:08:32.268 user 0m0.037s 00:08:32.268 sys 0m0.026s 00:08:32.268 05:05:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.268 ************************************ 00:08:32.268 05:05:51 -- common/autotest_common.sh@10 -- # set +x 00:08:32.268 END TEST rpc_trace_cmd_test 00:08:32.268 ************************************ 00:08:32.268 05:05:51 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:32.268 05:05:51 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:32.268 05:05:51 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:32.268 05:05:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:32.268 05:05:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:32.268 05:05:51 -- common/autotest_common.sh@10 -- # set +x 00:08:32.268 ************************************ 00:08:32.268 START TEST rpc_daemon_integrity 00:08:32.268 ************************************ 00:08:32.268 05:05:51 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:08:32.268 05:05:51 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:32.268 05:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.268 05:05:51 -- common/autotest_common.sh@10 -- # set +x 00:08:32.268 05:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.268 05:05:51 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:32.268 05:05:51 -- rpc/rpc.sh@13 -- # jq length 00:08:32.268 05:05:51 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:32.268 05:05:51 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:32.268 05:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.268 05:05:51 -- common/autotest_common.sh@10 -- # set +x 00:08:32.268 05:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.268 05:05:51 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:32.268 05:05:51 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:32.268 05:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.268 05:05:51 -- common/autotest_common.sh@10 -- # set +x 00:08:32.527 05:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.527 05:05:51 -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:32.527 { 00:08:32.527 "name": "Malloc2", 00:08:32.527 "aliases": [ 00:08:32.527 "ad94c6c2-23ea-4054-a88c-b6c4a19b0be0" 00:08:32.527 ], 00:08:32.527 "product_name": "Malloc disk", 00:08:32.527 "block_size": 512, 00:08:32.527 "num_blocks": 16384, 00:08:32.527 "uuid": "ad94c6c2-23ea-4054-a88c-b6c4a19b0be0", 00:08:32.527 "assigned_rate_limits": { 00:08:32.527 "rw_ios_per_sec": 0, 00:08:32.527 "rw_mbytes_per_sec": 0, 00:08:32.527 "r_mbytes_per_sec": 0, 00:08:32.527 "w_mbytes_per_sec": 0 00:08:32.527 }, 00:08:32.527 "claimed": false, 00:08:32.527 "zoned": false, 00:08:32.527 "supported_io_types": { 00:08:32.527 "read": true, 00:08:32.527 "write": true, 00:08:32.527 "unmap": true, 00:08:32.527 "write_zeroes": true, 00:08:32.527 "flush": true, 00:08:32.527 "reset": true, 00:08:32.527 "compare": false, 00:08:32.527 "compare_and_write": false, 00:08:32.527 "abort": true, 00:08:32.527 "nvme_admin": false, 00:08:32.527 "nvme_io": false 00:08:32.527 }, 00:08:32.527 "memory_domains": [ 00:08:32.527 { 00:08:32.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.527 "dma_device_type": 2 00:08:32.527 } 00:08:32.527 ], 00:08:32.527 "driver_specific": {} 00:08:32.527 } 00:08:32.527 ]' 00:08:32.527 05:05:51 -- rpc/rpc.sh@17 -- # jq length 00:08:32.527 05:05:51 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:32.527 05:05:51 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:32.528 05:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.528 05:05:51 -- common/autotest_common.sh@10 -- # set +x 00:08:32.528 [2024-07-26 05:05:51.405155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:32.528 [2024-07-26 05:05:51.405241] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.528 [2024-07-26 05:05:51.405273] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:08:32.528 [2024-07-26 05:05:51.405290] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.528 [2024-07-26 05:05:51.407867] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.528 [2024-07-26 05:05:51.407942] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:32.528 Passthru0 00:08:32.528 05:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.528 05:05:51 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:32.528 05:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.528 05:05:51 -- common/autotest_common.sh@10 -- # set +x 00:08:32.528 05:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.528 05:05:51 -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:32.528 { 00:08:32.528 "name": "Malloc2", 00:08:32.528 "aliases": [ 00:08:32.528 "ad94c6c2-23ea-4054-a88c-b6c4a19b0be0" 00:08:32.528 ], 00:08:32.528 "product_name": "Malloc disk", 00:08:32.528 "block_size": 512, 00:08:32.528 "num_blocks": 16384, 00:08:32.528 "uuid": "ad94c6c2-23ea-4054-a88c-b6c4a19b0be0", 00:08:32.528 "assigned_rate_limits": { 00:08:32.528 "rw_ios_per_sec": 0, 00:08:32.528 "rw_mbytes_per_sec": 0, 00:08:32.528 "r_mbytes_per_sec": 0, 00:08:32.528 
"w_mbytes_per_sec": 0 00:08:32.528 }, 00:08:32.528 "claimed": true, 00:08:32.528 "claim_type": "exclusive_write", 00:08:32.528 "zoned": false, 00:08:32.528 "supported_io_types": { 00:08:32.528 "read": true, 00:08:32.528 "write": true, 00:08:32.528 "unmap": true, 00:08:32.528 "write_zeroes": true, 00:08:32.528 "flush": true, 00:08:32.528 "reset": true, 00:08:32.528 "compare": false, 00:08:32.528 "compare_and_write": false, 00:08:32.528 "abort": true, 00:08:32.528 "nvme_admin": false, 00:08:32.528 "nvme_io": false 00:08:32.528 }, 00:08:32.528 "memory_domains": [ 00:08:32.528 { 00:08:32.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.528 "dma_device_type": 2 00:08:32.528 } 00:08:32.528 ], 00:08:32.528 "driver_specific": {} 00:08:32.528 }, 00:08:32.528 { 00:08:32.528 "name": "Passthru0", 00:08:32.528 "aliases": [ 00:08:32.528 "cbedeb12-b4d1-58ac-a65c-ab2dab7b31f0" 00:08:32.528 ], 00:08:32.528 "product_name": "passthru", 00:08:32.528 "block_size": 512, 00:08:32.528 "num_blocks": 16384, 00:08:32.528 "uuid": "cbedeb12-b4d1-58ac-a65c-ab2dab7b31f0", 00:08:32.528 "assigned_rate_limits": { 00:08:32.528 "rw_ios_per_sec": 0, 00:08:32.528 "rw_mbytes_per_sec": 0, 00:08:32.528 "r_mbytes_per_sec": 0, 00:08:32.528 "w_mbytes_per_sec": 0 00:08:32.528 }, 00:08:32.528 "claimed": false, 00:08:32.528 "zoned": false, 00:08:32.528 "supported_io_types": { 00:08:32.528 "read": true, 00:08:32.528 "write": true, 00:08:32.528 "unmap": true, 00:08:32.528 "write_zeroes": true, 00:08:32.528 "flush": true, 00:08:32.528 "reset": true, 00:08:32.528 "compare": false, 00:08:32.528 "compare_and_write": false, 00:08:32.528 "abort": true, 00:08:32.528 "nvme_admin": false, 00:08:32.528 "nvme_io": false 00:08:32.528 }, 00:08:32.528 "memory_domains": [ 00:08:32.528 { 00:08:32.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.528 "dma_device_type": 2 00:08:32.528 } 00:08:32.528 ], 00:08:32.528 "driver_specific": { 00:08:32.528 "passthru": { 00:08:32.528 "name": "Passthru0", 00:08:32.528 "base_bdev_name": "Malloc2" 00:08:32.528 } 00:08:32.528 } 00:08:32.528 } 00:08:32.528 ]' 00:08:32.528 05:05:51 -- rpc/rpc.sh@21 -- # jq length 00:08:32.528 05:05:51 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:32.528 05:05:51 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:32.528 05:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.528 05:05:51 -- common/autotest_common.sh@10 -- # set +x 00:08:32.528 05:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.528 05:05:51 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:32.528 05:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.528 05:05:51 -- common/autotest_common.sh@10 -- # set +x 00:08:32.528 05:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.528 05:05:51 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:32.528 05:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.528 05:05:51 -- common/autotest_common.sh@10 -- # set +x 00:08:32.528 05:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.528 05:05:51 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:32.528 05:05:51 -- rpc/rpc.sh@26 -- # jq length 00:08:32.528 05:05:51 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:32.528 00:08:32.528 real 0m0.173s 00:08:32.528 user 0m0.049s 00:08:32.528 sys 0m0.040s 00:08:32.528 05:05:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.528 ************************************ 00:08:32.528 05:05:51 -- common/autotest_common.sh@10 -- # set +x 00:08:32.528 END TEST 
rpc_daemon_integrity 00:08:32.528 ************************************ 00:08:32.528 05:05:51 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:32.528 05:05:51 -- rpc/rpc.sh@84 -- # killprocess 60565 00:08:32.528 05:05:51 -- common/autotest_common.sh@926 -- # '[' -z 60565 ']' 00:08:32.528 05:05:51 -- common/autotest_common.sh@930 -- # kill -0 60565 00:08:32.528 05:05:51 -- common/autotest_common.sh@931 -- # uname 00:08:32.528 05:05:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:32.528 05:05:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60565 00:08:32.528 05:05:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:32.528 05:05:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:32.528 killing process with pid 60565 00:08:32.528 05:05:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60565' 00:08:32.528 05:05:51 -- common/autotest_common.sh@945 -- # kill 60565 00:08:32.528 05:05:51 -- common/autotest_common.sh@950 -- # wait 60565 00:08:34.429 00:08:34.429 real 0m4.430s 00:08:34.429 user 0m4.812s 00:08:34.429 sys 0m0.784s 00:08:34.429 05:05:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.429 05:05:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.429 ************************************ 00:08:34.429 END TEST rpc 00:08:34.429 ************************************ 00:08:34.429 05:05:53 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:34.429 05:05:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:34.429 05:05:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:34.429 05:05:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.429 ************************************ 00:08:34.429 START TEST rpc_client 00:08:34.429 ************************************ 00:08:34.429 05:05:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:34.687 * Looking for test storage... 
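The rpc_integrity and rpc_daemon_integrity runs above exercise the same short RPC sequence: create a malloc bdev, layer a passthru bdev on top of it, confirm both show up in bdev_get_bdevs, then delete them and confirm the list is empty again. A minimal manual sketch of that cycle (rpc.py path and malloc sizes mirror the trace; socket handling is left at the defaults and is illustrative):

# create base bdev and passthru, verify, then tear down (values as used by rpc.sh)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create 8 512                      # -> Malloc0 (8 MiB, 512-byte blocks)
$rpc bdev_passthru_create -b Malloc0 -p Passthru0
$rpc bdev_get_bdevs | jq length                    # expect 2
$rpc bdev_passthru_delete Passthru0
$rpc bdev_malloc_delete Malloc0
$rpc bdev_get_bdevs | jq length                    # expect 0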
00:08:34.687 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:08:34.687 05:05:53 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:08:34.687 OK 00:08:34.687 05:05:53 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:34.687 00:08:34.687 real 0m0.143s 00:08:34.687 user 0m0.071s 00:08:34.687 sys 0m0.083s 00:08:34.687 05:05:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.687 05:05:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.687 ************************************ 00:08:34.687 END TEST rpc_client 00:08:34.687 ************************************ 00:08:34.687 05:05:53 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:34.687 05:05:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:34.687 05:05:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:34.687 05:05:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.687 ************************************ 00:08:34.687 START TEST json_config 00:08:34.687 ************************************ 00:08:34.687 05:05:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:34.687 05:05:53 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:34.687 05:05:53 -- nvmf/common.sh@7 -- # uname -s 00:08:34.687 05:05:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.687 05:05:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.687 05:05:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.687 05:05:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.687 05:05:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.687 05:05:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.687 05:05:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.687 05:05:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.687 05:05:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.687 05:05:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.687 05:05:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:daa6cca6-b131-412d-ad3d-d3aef57713f9 00:08:34.687 05:05:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=daa6cca6-b131-412d-ad3d-d3aef57713f9 00:08:34.687 05:05:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.687 05:05:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.687 05:05:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:34.687 05:05:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:34.687 05:05:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.687 05:05:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.687 05:05:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.687 05:05:53 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:34.687 05:05:53 -- paths/export.sh@3 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:34.687 05:05:53 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:34.687 05:05:53 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:34.687 05:05:53 -- paths/export.sh@6 -- # export PATH 00:08:34.687 05:05:53 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:34.687 05:05:53 -- nvmf/common.sh@46 -- # : 0 00:08:34.687 05:05:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:34.687 05:05:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:34.687 05:05:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:34.687 05:05:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.687 05:05:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.687 05:05:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:34.687 05:05:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:34.687 05:05:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:34.687 05:05:53 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:08:34.687 05:05:53 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:08:34.687 05:05:53 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:08:34.687 05:05:53 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:34.687 05:05:53 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:08:34.687 05:05:53 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:08:34.687 05:05:53 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' 
['initiator']='/var/tmp/spdk_initiator.sock') 00:08:34.687 05:05:53 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:08:34.687 05:05:53 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:08:34.687 05:05:53 -- json_config/json_config.sh@32 -- # declare -A app_params 00:08:34.687 05:05:53 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:08:34.687 05:05:53 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:08:34.687 05:05:53 -- json_config/json_config.sh@43 -- # last_event_id=0 00:08:34.687 05:05:53 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:34.687 INFO: JSON configuration test init 00:08:34.687 05:05:53 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:08:34.687 05:05:53 -- json_config/json_config.sh@420 -- # json_config_test_init 00:08:34.687 05:05:53 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:08:34.687 05:05:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:34.687 05:05:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.687 05:05:53 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:08:34.687 05:05:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:34.688 05:05:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.688 05:05:53 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:08:34.688 05:05:53 -- json_config/json_config.sh@98 -- # local app=target 00:08:34.688 05:05:53 -- json_config/json_config.sh@99 -- # shift 00:08:34.688 05:05:53 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:08:34.688 05:05:53 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:08:34.688 05:05:53 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:08:34.688 05:05:53 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:34.688 05:05:53 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:34.688 05:05:53 -- json_config/json_config.sh@111 -- # app_pid[$app]=60823 00:08:34.688 Waiting for target to run... 00:08:34.688 05:05:53 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:08:34.688 05:05:53 -- json_config/json_config.sh@114 -- # waitforlisten 60823 /var/tmp/spdk_tgt.sock 00:08:34.688 05:05:53 -- common/autotest_common.sh@819 -- # '[' -z 60823 ']' 00:08:34.688 05:05:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:34.688 05:05:53 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:08:34.688 05:05:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:34.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:34.688 05:05:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:34.688 05:05:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:34.688 05:05:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.945 [2024-07-26 05:05:53.834217] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
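Everything from here on drives a target that json_config_test_start_app launched with --wait-for-rpc, so nothing initializes until the test pushes a configuration over the RPC socket. Stripped of the test plumbing, the start-and-wait step is roughly as follows (binary, flags and socket are the ones visible in the trace; the polling loop is a simplified stand-in for waitforlisten):

# start the target idle, then poll the socket until it answers (simplified)
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
app_pid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
      rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done
# the test then feeds a configuration via load_config to bring the subsystems up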
00:08:34.945 [2024-07-26 05:05:53.834402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60823 ] 00:08:35.204 [2024-07-26 05:05:54.196983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.463 [2024-07-26 05:05:54.342389] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:35.463 [2024-07-26 05:05:54.342649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.722 05:05:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:35.722 05:05:54 -- common/autotest_common.sh@852 -- # return 0 00:08:35.722 00:08:35.722 05:05:54 -- json_config/json_config.sh@115 -- # echo '' 00:08:35.722 05:05:54 -- json_config/json_config.sh@322 -- # create_accel_config 00:08:35.722 05:05:54 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:08:35.722 05:05:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:35.722 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:08:35.722 05:05:54 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:08:35.722 05:05:54 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:08:35.722 05:05:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:35.722 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:08:35.722 05:05:54 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:08:35.722 05:05:54 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:08:35.722 05:05:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:08:36.657 05:05:55 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:08:36.657 05:05:55 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:08:36.657 05:05:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:36.657 05:05:55 -- common/autotest_common.sh@10 -- # set +x 00:08:36.657 05:05:55 -- json_config/json_config.sh@48 -- # local ret=0 00:08:36.657 05:05:55 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:08:36.657 05:05:55 -- json_config/json_config.sh@49 -- # local enabled_types 00:08:36.657 05:05:55 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:08:36.657 05:05:55 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:08:36.657 05:05:55 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:08:36.915 05:05:55 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:08:36.915 05:05:55 -- json_config/json_config.sh@51 -- # local get_types 00:08:36.915 05:05:55 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:08:36.915 05:05:55 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:08:36.915 05:05:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:36.915 05:05:55 -- common/autotest_common.sh@10 -- # set +x 00:08:36.915 05:05:55 -- json_config/json_config.sh@58 -- # return 0 00:08:36.915 05:05:55 -- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]] 00:08:36.915 05:05:55 -- json_config/json_config.sh@332 -- # 
create_bdev_subsystem_config 00:08:36.915 05:05:55 -- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config 00:08:36.915 05:05:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:36.915 05:05:55 -- common/autotest_common.sh@10 -- # set +x 00:08:36.915 05:05:55 -- json_config/json_config.sh@160 -- # expected_notifications=() 00:08:36.915 05:05:55 -- json_config/json_config.sh@160 -- # local expected_notifications 00:08:36.915 05:05:55 -- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications)) 00:08:36.915 05:05:55 -- json_config/json_config.sh@164 -- # get_notifications 00:08:36.915 05:05:55 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:08:36.915 05:05:55 -- json_config/json_config.sh@64 -- # IFS=: 00:08:36.915 05:05:55 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:36.915 05:05:55 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:08:36.915 05:05:55 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:08:36.915 05:05:55 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:08:37.173 05:05:56 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:08:37.173 05:05:56 -- json_config/json_config.sh@64 -- # IFS=: 00:08:37.173 05:05:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:37.173 05:05:56 -- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]] 00:08:37.173 05:05:56 -- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1 00:08:37.173 05:05:56 -- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:08:37.173 05:05:56 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:08:37.432 Nvme0n1p0 Nvme0n1p1 00:08:37.432 05:05:56 -- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3 00:08:37.432 05:05:56 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:08:37.690 [2024-07-26 05:05:56.736394] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:37.690 [2024-07-26 05:05:56.736507] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:37.690 00:08:37.690 05:05:56 -- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:08:37.690 05:05:56 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:08:37.948 Malloc3 00:08:37.948 05:05:56 -- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:08:37.948 05:05:56 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:08:38.207 [2024-07-26 05:05:57.168805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:38.207 [2024-07-26 05:05:57.168897] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:38.207 [2024-07-26 05:05:57.168929] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:08:38.207 [2024-07-26 05:05:57.168947] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:08:38.207 [2024-07-26 05:05:57.171626] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:38.207 [2024-07-26 05:05:57.171688] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:08:38.207 PTBdevFromMalloc3 00:08:38.207 05:05:57 -- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512 00:08:38.207 05:05:57 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:08:38.465 Null0 00:08:38.465 05:05:57 -- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:08:38.465 05:05:57 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:08:38.723 Malloc0 00:08:38.723 05:05:57 -- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:08:38.724 05:05:57 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:08:38.982 Malloc1 00:08:38.982 05:05:57 -- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:08:38.982 05:05:57 -- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:08:39.240 102400+0 records in 00:08:39.240 102400+0 records out 00:08:39.240 104857600 bytes (105 MB, 100 MiB) copied, 0.275596 s, 380 MB/s 00:08:39.240 05:05:58 -- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:08:39.240 05:05:58 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:08:39.498 aio_disk 00:08:39.498 05:05:58 -- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk) 00:08:39.498 05:05:58 -- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:08:39.498 05:05:58 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:08:39.770 b3641690-e824-4465-86f2-2ab95e46e7d7 00:08:39.770 05:05:58 -- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:08:39.770 05:05:58 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:08:39.770 05:05:58 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:08:40.040 05:05:58 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:08:40.040 05:05:58 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:08:40.040 05:05:59 -- json_config/json_config.sh@207 -- # tgt_rpc 
bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:08:40.040 05:05:59 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:08:40.298 05:05:59 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:08:40.298 05:05:59 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:08:40.557 05:05:59 -- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]] 00:08:40.557 05:05:59 -- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]] 00:08:40.557 05:05:59 -- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:345f1c86-2cb1-41b1-a74b-807482c099c8 bdev_register:f9cf4122-734c-4d47-b497-12eb66531d28 bdev_register:804364c9-871b-45d6-a83d-e89ff55ed0ec bdev_register:a51db330-6281-401a-946c-e2146a44ab54 00:08:40.557 05:05:59 -- json_config/json_config.sh@70 -- # local events_to_check 00:08:40.557 05:05:59 -- json_config/json_config.sh@71 -- # local recorded_events 00:08:40.557 05:05:59 -- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:08:40.557 05:05:59 -- json_config/json_config.sh@74 -- # sort 00:08:40.557 05:05:59 -- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:345f1c86-2cb1-41b1-a74b-807482c099c8 bdev_register:f9cf4122-734c-4d47-b497-12eb66531d28 bdev_register:804364c9-871b-45d6-a83d-e89ff55ed0ec bdev_register:a51db330-6281-401a-946c-e2146a44ab54 00:08:40.557 05:05:59 -- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort)) 00:08:40.557 05:05:59 -- json_config/json_config.sh@75 -- # get_notifications 00:08:40.557 05:05:59 -- json_config/json_config.sh@75 -- # sort 00:08:40.557 05:05:59 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:08:40.557 05:05:59 -- json_config/json_config.sh@64 -- # IFS=: 00:08:40.557 05:05:59 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:40.557 05:05:59 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:08:40.557 05:05:59 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:08:40.557 05:05:59 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:08:40.817 05:05:59 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:08:40.817 05:05:59 -- json_config/json_config.sh@64 -- # IFS=: 00:08:40.817 05:05:59 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:40.817 05:05:59 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1 00:08:40.817 05:05:59 -- json_config/json_config.sh@64 -- # IFS=: 00:08:40.817 05:05:59 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:40.817 05:05:59 -- json_config/json_config.sh@65 -- # echo 
bdev_register:Nvme0n1p0 00:08:40.817 05:05:59 -- json_config/json_config.sh@64 -- # IFS=: 00:08:40.817 05:05:59 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:40.817 05:05:59 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3 00:08:40.817 05:05:59 -- json_config/json_config.sh@64 -- # IFS=: 00:08:40.817 05:05:59 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:40.817 05:05:59 -- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3 00:08:40.817 05:05:59 -- json_config/json_config.sh@64 -- # IFS=: 00:08:40.817 05:05:59 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:40.817 05:05:59 -- json_config/json_config.sh@65 -- # echo bdev_register:Null0 00:08:40.817 05:05:59 -- json_config/json_config.sh@64 -- # IFS=: 00:08:40.817 05:05:59 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:40.817 05:05:59 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0 00:08:40.817 05:05:59 -- json_config/json_config.sh@64 -- # IFS=: 00:08:40.817 05:05:59 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:40.817 05:05:59 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2 00:08:40.817 05:05:59 -- json_config/json_config.sh@64 -- # IFS=: 00:08:40.817 05:05:59 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:40.817 05:05:59 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1 00:08:40.817 05:05:59 -- json_config/json_config.sh@64 -- # IFS=: 00:08:40.817 05:05:59 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:40.817 05:05:59 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0 00:08:40.817 05:05:59 -- json_config/json_config.sh@64 -- # IFS=: 00:08:40.817 05:05:59 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:40.817 05:05:59 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1 00:08:40.817 05:05:59 -- json_config/json_config.sh@64 -- # IFS=: 00:08:40.817 05:05:59 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:40.817 05:05:59 -- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk 00:08:40.817 05:05:59 -- json_config/json_config.sh@64 -- # IFS=: 00:08:40.817 05:05:59 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:40.817 05:05:59 -- json_config/json_config.sh@65 -- # echo bdev_register:345f1c86-2cb1-41b1-a74b-807482c099c8 00:08:40.817 05:05:59 -- json_config/json_config.sh@64 -- # IFS=: 00:08:40.817 05:05:59 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:40.817 05:05:59 -- json_config/json_config.sh@65 -- # echo bdev_register:f9cf4122-734c-4d47-b497-12eb66531d28 00:08:40.817 05:05:59 -- json_config/json_config.sh@64 -- # IFS=: 00:08:40.817 05:05:59 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:40.817 05:05:59 -- json_config/json_config.sh@65 -- # echo bdev_register:804364c9-871b-45d6-a83d-e89ff55ed0ec 00:08:40.817 05:05:59 -- json_config/json_config.sh@64 -- # IFS=: 00:08:40.817 05:05:59 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:40.817 05:05:59 -- json_config/json_config.sh@65 -- # echo bdev_register:a51db330-6281-401a-946c-e2146a44ab54 00:08:40.817 05:05:59 -- json_config/json_config.sh@64 -- # IFS=: 00:08:40.817 05:05:59 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:40.817 05:05:59 -- json_config/json_config.sh@77 
-- # [[ bdev_register:345f1c86-2cb1-41b1-a74b-807482c099c8 bdev_register:804364c9-871b-45d6-a83d-e89ff55ed0ec bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:a51db330-6281-401a-946c-e2146a44ab54 bdev_register:aio_disk bdev_register:f9cf4122-734c-4d47-b497-12eb66531d28 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\3\4\5\f\1\c\8\6\-\2\c\b\1\-\4\1\b\1\-\a\7\4\b\-\8\0\7\4\8\2\c\0\9\9\c\8\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\0\4\3\6\4\c\9\-\8\7\1\b\-\4\5\d\6\-\a\8\3\d\-\e\8\9\f\f\5\5\e\d\0\e\c\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\5\1\d\b\3\3\0\-\6\2\8\1\-\4\0\1\a\-\9\4\6\c\-\e\2\1\4\6\a\4\4\a\b\5\4\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\f\9\c\f\4\1\2\2\-\7\3\4\c\-\4\d\4\7\-\b\4\9\7\-\1\2\e\b\6\6\5\3\1\d\2\8 ]] 00:08:40.817 05:05:59 -- json_config/json_config.sh@89 -- # cat 00:08:40.817 05:05:59 -- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:345f1c86-2cb1-41b1-a74b-807482c099c8 bdev_register:804364c9-871b-45d6-a83d-e89ff55ed0ec bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:a51db330-6281-401a-946c-e2146a44ab54 bdev_register:aio_disk bdev_register:f9cf4122-734c-4d47-b497-12eb66531d28 00:08:40.817 Expected events matched: 00:08:40.817 bdev_register:345f1c86-2cb1-41b1-a74b-807482c099c8 00:08:40.817 bdev_register:804364c9-871b-45d6-a83d-e89ff55ed0ec 00:08:40.817 bdev_register:Malloc0 00:08:40.817 bdev_register:Malloc0p0 00:08:40.817 bdev_register:Malloc0p1 00:08:40.817 bdev_register:Malloc0p2 00:08:40.817 bdev_register:Malloc1 00:08:40.817 bdev_register:Malloc3 00:08:40.817 bdev_register:Null0 00:08:40.817 bdev_register:Nvme0n1 00:08:40.817 bdev_register:Nvme0n1p0 00:08:40.817 bdev_register:Nvme0n1p1 00:08:40.817 bdev_register:PTBdevFromMalloc3 00:08:40.817 bdev_register:a51db330-6281-401a-946c-e2146a44ab54 00:08:40.817 bdev_register:aio_disk 00:08:40.817 bdev_register:f9cf4122-734c-4d47-b497-12eb66531d28 00:08:40.817 05:05:59 -- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config 00:08:40.817 05:05:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:40.817 05:05:59 -- common/autotest_common.sh@10 -- # set +x 00:08:41.077 05:05:59 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:08:41.077 05:05:59 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:08:41.077 05:05:59 -- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]] 00:08:41.077 05:05:59 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:08:41.077 05:05:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:41.077 05:05:59 -- common/autotest_common.sh@10 -- # set +x 00:08:41.077 
05:05:59 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:08:41.077 05:05:59 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:41.077 05:05:59 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:41.335 MallocBdevForConfigChangeCheck 00:08:41.335 05:06:00 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:08:41.335 05:06:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:41.335 05:06:00 -- common/autotest_common.sh@10 -- # set +x 00:08:41.335 05:06:00 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:08:41.335 05:06:00 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:41.593 INFO: shutting down applications... 00:08:41.593 05:06:00 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:08:41.593 05:06:00 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:08:41.593 05:06:00 -- json_config/json_config.sh@431 -- # json_config_clear target 00:08:41.593 05:06:00 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:08:41.593 05:06:00 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:08:41.852 [2024-07-26 05:06:00.881704] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:08:42.110 Calling clear_vhost_scsi_subsystem 00:08:42.110 Calling clear_iscsi_subsystem 00:08:42.110 Calling clear_vhost_blk_subsystem 00:08:42.110 Calling clear_ublk_subsystem 00:08:42.110 Calling clear_nbd_subsystem 00:08:42.110 Calling clear_nvmf_subsystem 00:08:42.110 Calling clear_bdev_subsystem 00:08:42.110 Calling clear_accel_subsystem 00:08:42.110 Calling clear_iobuf_subsystem 00:08:42.110 Calling clear_sock_subsystem 00:08:42.110 Calling clear_vmd_subsystem 00:08:42.110 Calling clear_scheduler_subsystem 00:08:42.110 05:06:01 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:08:42.110 05:06:01 -- json_config/json_config.sh@396 -- # count=100 00:08:42.110 05:06:01 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:08:42.110 05:06:01 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:42.110 05:06:01 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:08:42.110 05:06:01 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:08:42.368 05:06:01 -- json_config/json_config.sh@398 -- # break 00:08:42.369 05:06:01 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:08:42.369 05:06:01 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:08:42.369 05:06:01 -- json_config/json_config.sh@120 -- # local app=target 00:08:42.369 05:06:01 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:08:42.369 05:06:01 -- json_config/json_config.sh@124 -- # [[ -n 60823 ]] 00:08:42.369 05:06:01 -- json_config/json_config.sh@127 -- # kill -SIGINT 60823 00:08:42.369 05:06:01 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:08:42.369 05:06:01 -- json_config/json_config.sh@129 -- # (( i < 30 )) 
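This shutdown is just SIGINT plus a bounded poll on the target pid, matching the i-loop unfolding in the trace (up to 30 iterations, 0.5 s apart); the shape of json_config_test_shutdown_app is roughly:

# send SIGINT, then wait for the pid to disappear (bounds as in the trace above)
kill -SIGINT "$app_pid"
for (( i = 0; i < 30; i++ )); do
    kill -0 "$app_pid" 2>/dev/null || break   # target exited cleanly
    sleep 0.5
done
kill -0 "$app_pid" 2>/dev/null && echo "ERROR: target did not shut down" >&2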
00:08:42.369 05:06:01 -- json_config/json_config.sh@130 -- # kill -0 60823 00:08:42.369 05:06:01 -- json_config/json_config.sh@134 -- # sleep 0.5 00:08:42.936 05:06:01 -- json_config/json_config.sh@129 -- # (( i++ )) 00:08:42.936 05:06:01 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:08:42.936 05:06:01 -- json_config/json_config.sh@130 -- # kill -0 60823 00:08:42.936 05:06:01 -- json_config/json_config.sh@134 -- # sleep 0.5 00:08:43.503 05:06:02 -- json_config/json_config.sh@129 -- # (( i++ )) 00:08:43.504 05:06:02 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:08:43.504 05:06:02 -- json_config/json_config.sh@130 -- # kill -0 60823 00:08:43.504 05:06:02 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:08:43.504 05:06:02 -- json_config/json_config.sh@132 -- # break 00:08:43.504 05:06:02 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:08:43.504 SPDK target shutdown done 00:08:43.504 05:06:02 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:08:43.504 INFO: relaunching applications... 00:08:43.504 05:06:02 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:08:43.504 05:06:02 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:43.504 05:06:02 -- json_config/json_config.sh@98 -- # local app=target 00:08:43.504 05:06:02 -- json_config/json_config.sh@99 -- # shift 00:08:43.504 05:06:02 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:08:43.504 05:06:02 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:08:43.504 05:06:02 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:08:43.504 05:06:02 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:43.504 05:06:02 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:43.504 05:06:02 -- json_config/json_config.sh@111 -- # app_pid[$app]=61069 00:08:43.504 05:06:02 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:43.504 Waiting for target to run... 00:08:43.504 05:06:02 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:08:43.504 05:06:02 -- json_config/json_config.sh@114 -- # waitforlisten 61069 /var/tmp/spdk_tgt.sock 00:08:43.504 05:06:02 -- common/autotest_common.sh@819 -- # '[' -z 61069 ']' 00:08:43.504 05:06:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:43.504 05:06:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:43.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:43.504 05:06:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:43.504 05:06:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:43.504 05:06:02 -- common/autotest_common.sh@10 -- # set +x 00:08:43.504 [2024-07-26 05:06:02.541101] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
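The relaunch points the target straight at the JSON produced earlier by save_config, so the whole malloc/passthru/lvol/aio topology is reconstructed at startup instead of being rebuilt over RPC. The two halves of that round-trip, reduced to their essentials (paths as in this run, pid handling omitted):

# 1) capture the live configuration of the running target
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
# 2) start a fresh target directly from that file
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json &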
00:08:43.504 [2024-07-26 05:06:02.541230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61069 ] 00:08:43.762 [2024-07-26 05:06:02.868331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.021 [2024-07-26 05:06:03.039286] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:44.021 [2024-07-26 05:06:03.039568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.957 [2024-07-26 05:06:03.725538] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:08:44.957 [2024-07-26 05:06:03.725621] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:08:44.957 [2024-07-26 05:06:03.733473] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:44.957 [2024-07-26 05:06:03.733541] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:44.957 [2024-07-26 05:06:03.741510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:44.957 [2024-07-26 05:06:03.741556] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:44.957 [2024-07-26 05:06:03.741573] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:44.957 [2024-07-26 05:06:03.835483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:44.957 [2024-07-26 05:06:03.835592] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.957 [2024-07-26 05:06:03.835615] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:08:44.957 [2024-07-26 05:06:03.835627] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.957 [2024-07-26 05:06:03.836116] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.957 [2024-07-26 05:06:03.836182] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:08:45.215 05:06:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:45.215 05:06:04 -- common/autotest_common.sh@852 -- # return 0 00:08:45.215 00:08:45.215 05:06:04 -- json_config/json_config.sh@115 -- # echo '' 00:08:45.215 05:06:04 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:08:45.215 05:06:04 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:08:45.215 INFO: Checking if target configuration is the same... 00:08:45.215 05:06:04 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:45.216 05:06:04 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:08:45.216 05:06:04 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:45.216 + '[' 2 -ne 2 ']' 00:08:45.216 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:08:45.216 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:08:45.216 + rootdir=/home/vagrant/spdk_repo/spdk 00:08:45.216 +++ basename /dev/fd/62 00:08:45.216 ++ mktemp /tmp/62.XXX 00:08:45.216 + tmp_file_1=/tmp/62.HXQ 00:08:45.216 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:45.216 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:45.216 + tmp_file_2=/tmp/spdk_tgt_config.json.EeR 00:08:45.216 + ret=0 00:08:45.216 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:45.782 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:45.782 + diff -u /tmp/62.HXQ /tmp/spdk_tgt_config.json.EeR 00:08:45.782 INFO: JSON config files are the same 00:08:45.782 + echo 'INFO: JSON config files are the same' 00:08:45.782 + rm /tmp/62.HXQ /tmp/spdk_tgt_config.json.EeR 00:08:45.782 + exit 0 00:08:45.782 05:06:04 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:08:45.782 INFO: changing configuration and checking if this can be detected... 00:08:45.783 05:06:04 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:08:45.783 05:06:04 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:45.783 05:06:04 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:46.041 05:06:04 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:46.041 05:06:04 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:08:46.041 05:06:04 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:46.041 + '[' 2 -ne 2 ']' 00:08:46.041 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:08:46.041 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:08:46.041 + rootdir=/home/vagrant/spdk_repo/spdk 00:08:46.041 +++ basename /dev/fd/62 00:08:46.041 ++ mktemp /tmp/62.XXX 00:08:46.041 + tmp_file_1=/tmp/62.gC0 00:08:46.041 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:46.041 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:46.041 + tmp_file_2=/tmp/spdk_tgt_config.json.LZJ 00:08:46.041 + ret=0 00:08:46.041 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:46.300 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:46.300 + diff -u /tmp/62.gC0 /tmp/spdk_tgt_config.json.LZJ 00:08:46.300 + ret=1 00:08:46.300 + echo '=== Start of file: /tmp/62.gC0 ===' 00:08:46.300 + cat /tmp/62.gC0 00:08:46.300 + echo '=== End of file: /tmp/62.gC0 ===' 00:08:46.300 + echo '' 00:08:46.300 + echo '=== Start of file: /tmp/spdk_tgt_config.json.LZJ ===' 00:08:46.300 + cat /tmp/spdk_tgt_config.json.LZJ 00:08:46.300 + echo '=== End of file: /tmp/spdk_tgt_config.json.LZJ ===' 00:08:46.300 + echo '' 00:08:46.300 + rm /tmp/62.gC0 /tmp/spdk_tgt_config.json.LZJ 00:08:46.300 + exit 1 00:08:46.300 INFO: configuration change detected. 00:08:46.300 05:06:05 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
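For reference, the check that just ran (identical files -> exit 0, then a mismatch after MallocBdevForConfigChangeCheck is deleted -> 'configuration change detected.') reduces to: dump the live config with save_config, sort both JSON documents, and diff them. A minimal sketch of that flow, using the paths visible in this run and assuming config_filter.py accepts the config on stdin as the pipeline above suggests:

#!/usr/bin/env bash
# Sketch only: the normalize-and-diff check performed by json_diff.sh above.
# Tool paths and '-method sort' come from this trace; stdin handling of
# config_filter.py is inferred from the pipeline, not verified separately.
set -euo pipefail

rootdir=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/spdk_tgt.sock
saved_cfg=$rootdir/spdk_tgt_config.json

live_sorted=$(mktemp /tmp/live.XXX)
saved_sorted=$(mktemp /tmp/saved.XXX)

# Dump the running target's configuration and sort it so key/array
# ordering cannot produce spurious differences.
"$rootdir/scripts/rpc.py" -s "$sock" save_config \
    | "$rootdir/test/json_config/config_filter.py" -method sort > "$live_sorted"
"$rootdir/test/json_config/config_filter.py" -method sort < "$saved_cfg" > "$saved_sorted"

if diff -u "$saved_sorted" "$live_sorted"; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
rm -f "$live_sorted" "$saved_sorted"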
00:08:46.300 05:06:05 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:08:46.300 05:06:05 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:08:46.300 05:06:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:46.300 05:06:05 -- common/autotest_common.sh@10 -- # set +x 00:08:46.300 05:06:05 -- json_config/json_config.sh@360 -- # local ret=0 00:08:46.300 05:06:05 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:08:46.300 05:06:05 -- json_config/json_config.sh@370 -- # [[ -n 61069 ]] 00:08:46.558 05:06:05 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:08:46.558 05:06:05 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:08:46.558 05:06:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:46.558 05:06:05 -- common/autotest_common.sh@10 -- # set +x 00:08:46.558 05:06:05 -- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]] 00:08:46.558 05:06:05 -- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:08:46.558 05:06:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:08:46.817 05:06:05 -- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:08:46.817 05:06:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:08:46.817 05:06:05 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:08:46.817 05:06:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:08:47.076 05:06:06 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:08:47.076 05:06:06 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:08:47.334 05:06:06 -- json_config/json_config.sh@246 -- # uname -s 00:08:47.334 05:06:06 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:08:47.334 05:06:06 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:08:47.334 05:06:06 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:08:47.334 05:06:06 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:08:47.334 05:06:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:47.334 05:06:06 -- common/autotest_common.sh@10 -- # set +x 00:08:47.334 05:06:06 -- json_config/json_config.sh@376 -- # killprocess 61069 00:08:47.334 05:06:06 -- common/autotest_common.sh@926 -- # '[' -z 61069 ']' 00:08:47.334 05:06:06 -- common/autotest_common.sh@930 -- # kill -0 61069 00:08:47.334 05:06:06 -- common/autotest_common.sh@931 -- # uname 00:08:47.334 05:06:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:47.334 05:06:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61069 00:08:47.334 killing process with pid 61069 00:08:47.334 05:06:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:47.334 05:06:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:47.334 05:06:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61069' 00:08:47.334 05:06:06 -- common/autotest_common.sh@945 -- # kill 61069 00:08:47.334 05:06:06 -- common/autotest_common.sh@950 -- # wait 61069 00:08:48.270 05:06:07 -- json_config/json_config.sh@379 -- 
# rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:48.271 05:06:07 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:08:48.271 05:06:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:48.271 05:06:07 -- common/autotest_common.sh@10 -- # set +x 00:08:48.530 INFO: Success 00:08:48.530 05:06:07 -- json_config/json_config.sh@381 -- # return 0 00:08:48.530 05:06:07 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:08:48.530 ************************************ 00:08:48.530 END TEST json_config 00:08:48.530 ************************************ 00:08:48.530 00:08:48.530 real 0m13.726s 00:08:48.530 user 0m20.120s 00:08:48.530 sys 0m2.300s 00:08:48.530 05:06:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.530 05:06:07 -- common/autotest_common.sh@10 -- # set +x 00:08:48.530 05:06:07 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:48.530 05:06:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:48.530 05:06:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:48.530 05:06:07 -- common/autotest_common.sh@10 -- # set +x 00:08:48.530 ************************************ 00:08:48.530 START TEST json_config_extra_key 00:08:48.530 ************************************ 00:08:48.530 05:06:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:48.530 05:06:07 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:48.530 05:06:07 -- nvmf/common.sh@7 -- # uname -s 00:08:48.530 05:06:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.530 05:06:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.530 05:06:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.530 05:06:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.530 05:06:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.530 05:06:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.530 05:06:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.530 05:06:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.530 05:06:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.530 05:06:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.530 05:06:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:daa6cca6-b131-412d-ad3d-d3aef57713f9 00:08:48.530 05:06:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=daa6cca6-b131-412d-ad3d-d3aef57713f9 00:08:48.530 05:06:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.530 05:06:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.530 05:06:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:48.530 05:06:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:48.530 05:06:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.530 05:06:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.530 05:06:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.530 05:06:07 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:48.530 05:06:07 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:48.530 05:06:07 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:48.530 05:06:07 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:48.530 05:06:07 -- paths/export.sh@6 -- # export PATH 00:08:48.530 05:06:07 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:48.530 05:06:07 -- nvmf/common.sh@46 -- # : 0 00:08:48.530 05:06:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:48.530 05:06:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:48.531 05:06:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:48.531 05:06:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.531 05:06:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.531 05:06:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:48.531 05:06:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:48.531 05:06:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:48.531 05:06:07 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:08:48.531 05:06:07 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:08:48.531 05:06:07 -- 
json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:48.531 05:06:07 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:08:48.531 05:06:07 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:48.531 05:06:07 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:08:48.531 05:06:07 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:08:48.531 05:06:07 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:08:48.531 05:06:07 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:48.531 05:06:07 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:08:48.531 INFO: launching applications... 00:08:48.531 05:06:07 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:48.531 05:06:07 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:08:48.531 05:06:07 -- json_config/json_config_extra_key.sh@25 -- # shift 00:08:48.531 05:06:07 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:08:48.531 05:06:07 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:08:48.531 05:06:07 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=61243 00:08:48.531 05:06:07 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:48.531 Waiting for target to run... 00:08:48.531 05:06:07 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:08:48.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:48.531 05:06:07 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 61243 /var/tmp/spdk_tgt.sock 00:08:48.531 05:06:07 -- common/autotest_common.sh@819 -- # '[' -z 61243 ']' 00:08:48.531 05:06:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:48.531 05:06:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:48.531 05:06:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:48.531 05:06:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:48.531 05:06:07 -- common/autotest_common.sh@10 -- # set +x 00:08:48.531 [2024-07-26 05:06:07.616007] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:48.531 [2024-07-26 05:06:07.616242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61243 ] 00:08:49.098 [2024-07-26 05:06:07.967385] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.098 [2024-07-26 05:06:08.170686] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:49.098 [2024-07-26 05:06:08.170947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.473 00:08:50.473 INFO: shutting down applications... 
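A standalone approximation of the launch step traced above (spdk_tgt started with '-m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json extra_key.json', followed by 'Waiting for target to run...'). The readiness probe below uses 'rpc.py spdk_get_version' and is an assumption standing in for the harness's waitforlisten helper, whose internals are not shown in this log:

#!/usr/bin/env bash
# Sketch: start spdk_tgt from a JSON config and wait for its RPC socket.
# The spdk_get_version probe is an assumption, not the literal
# waitforlisten implementation from autotest_common.sh.
set -euo pipefail

rootdir=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/spdk_tgt.sock
cfg=$rootdir/test/json_config/extra_key.json

"$rootdir/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$sock" --json "$cfg" &
tgt_pid=$!

echo 'Waiting for target to run...'
for ((i = 0; i < 30; i++)); do
    # Succeeds only once the target is listening on the UNIX socket.
    if "$rootdir/scripts/rpc.py" -s "$sock" -t 1 spdk_get_version >/dev/null 2>&1; then
        echo "target is up (pid $tgt_pid)"
        exit 0
    fi
    sleep 0.5
done
echo 'target never came up' >&2
kill "$tgt_pid" 2>/dev/null || true
exit 1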
00:08:50.473 05:06:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:50.473 05:06:09 -- common/autotest_common.sh@852 -- # return 0 00:08:50.473 05:06:09 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:08:50.473 05:06:09 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:08:50.473 05:06:09 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:08:50.473 05:06:09 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:08:50.473 05:06:09 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:08:50.473 05:06:09 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 61243 ]] 00:08:50.473 05:06:09 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 61243 00:08:50.473 05:06:09 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:08:50.473 05:06:09 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:50.473 05:06:09 -- json_config/json_config_extra_key.sh@50 -- # kill -0 61243 00:08:50.473 05:06:09 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:50.732 05:06:09 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:50.732 05:06:09 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:50.732 05:06:09 -- json_config/json_config_extra_key.sh@50 -- # kill -0 61243 00:08:50.732 05:06:09 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:51.299 05:06:10 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:51.299 05:06:10 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:51.299 05:06:10 -- json_config/json_config_extra_key.sh@50 -- # kill -0 61243 00:08:51.299 05:06:10 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:51.866 05:06:10 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:51.866 05:06:10 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:51.866 05:06:10 -- json_config/json_config_extra_key.sh@50 -- # kill -0 61243 00:08:51.866 05:06:10 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:52.433 05:06:11 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:52.433 05:06:11 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:52.433 05:06:11 -- json_config/json_config_extra_key.sh@50 -- # kill -0 61243 00:08:52.433 05:06:11 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:53.000 05:06:11 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:53.000 05:06:11 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:53.000 05:06:11 -- json_config/json_config_extra_key.sh@50 -- # kill -0 61243 00:08:53.000 05:06:11 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:08:53.000 SPDK target shutdown done 00:08:53.000 05:06:11 -- json_config/json_config_extra_key.sh@52 -- # break 00:08:53.000 05:06:11 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:08:53.000 05:06:11 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:08:53.000 Success 00:08:53.000 05:06:11 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:08:53.000 ************************************ 00:08:53.000 END TEST json_config_extra_key 00:08:53.000 ************************************ 00:08:53.000 00:08:53.000 real 0m4.402s 00:08:53.000 user 0m4.412s 00:08:53.000 sys 0m0.565s 00:08:53.000 05:06:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.000 05:06:11 -- common/autotest_common.sh@10 -- # set +x 00:08:53.000 
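Both shutdown sequences in this log (pid 60823 earlier, pid 61243 here) follow the same pattern: send SIGINT, then poll the pid with 'kill -0' every 0.5 s for up to 30 attempts before reporting 'SPDK target shutdown done'. A condensed sketch of that loop, reusing the limits seen in the traces:

#!/usr/bin/env bash
# Sketch: graceful-shutdown wait mirroring json_config_test_shutdown_app.
# 30 iterations x 0.5 s matches the (( i < 30 )) / sleep 0.5 traces above;
# the exact ordering inside the harness loop may differ slightly.
shutdown_app() {
    local pid=$1
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    echo "pid $pid still alive after SIGINT" >&2
    return 1
}

# Example invocation (hypothetical pid):
# shutdown_app 61243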
05:06:11 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:53.000 05:06:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:53.000 05:06:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:53.000 05:06:11 -- common/autotest_common.sh@10 -- # set +x 00:08:53.000 ************************************ 00:08:53.000 START TEST alias_rpc 00:08:53.000 ************************************ 00:08:53.000 05:06:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:53.000 * Looking for test storage... 00:08:53.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:53.000 05:06:11 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:53.000 05:06:11 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61347 00:08:53.000 05:06:11 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61347 00:08:53.000 05:06:11 -- common/autotest_common.sh@819 -- # '[' -z 61347 ']' 00:08:53.000 05:06:11 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:53.000 05:06:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.000 05:06:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:53.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.000 05:06:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.000 05:06:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:53.000 05:06:11 -- common/autotest_common.sh@10 -- # set +x 00:08:53.000 [2024-07-26 05:06:12.058923] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:08:53.000 [2024-07-26 05:06:12.059125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61347 ] 00:08:53.259 [2024-07-26 05:06:12.230040] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.518 [2024-07-26 05:06:12.410521] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:53.518 [2024-07-26 05:06:12.410770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.891 05:06:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:54.891 05:06:13 -- common/autotest_common.sh@852 -- # return 0 00:08:54.891 05:06:13 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:55.149 05:06:14 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61347 00:08:55.149 05:06:14 -- common/autotest_common.sh@926 -- # '[' -z 61347 ']' 00:08:55.149 05:06:14 -- common/autotest_common.sh@930 -- # kill -0 61347 00:08:55.149 05:06:14 -- common/autotest_common.sh@931 -- # uname 00:08:55.149 05:06:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:55.149 05:06:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61347 00:08:55.149 05:06:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:55.149 05:06:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:55.149 killing process with pid 61347 00:08:55.149 05:06:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61347' 00:08:55.149 05:06:14 -- common/autotest_common.sh@945 -- # kill 61347 00:08:55.149 05:06:14 -- common/autotest_common.sh@950 -- # wait 61347 00:08:57.053 00:08:57.053 real 0m4.096s 00:08:57.053 user 0m4.512s 00:08:57.053 sys 0m0.542s 00:08:57.053 05:06:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.053 05:06:16 -- common/autotest_common.sh@10 -- # set +x 00:08:57.053 ************************************ 00:08:57.053 END TEST alias_rpc 00:08:57.053 ************************************ 00:08:57.053 05:06:16 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:08:57.053 05:06:16 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:57.053 05:06:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:57.053 05:06:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:57.053 05:06:16 -- common/autotest_common.sh@10 -- # set +x 00:08:57.053 ************************************ 00:08:57.053 START TEST spdkcli_tcp 00:08:57.053 ************************************ 00:08:57.053 05:06:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:57.053 * Looking for test storage... 
00:08:57.053 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:08:57.053 05:06:16 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:08:57.053 05:06:16 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:08:57.053 05:06:16 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:08:57.053 05:06:16 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:57.053 05:06:16 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:57.053 05:06:16 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:57.053 05:06:16 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:57.053 05:06:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:57.053 05:06:16 -- common/autotest_common.sh@10 -- # set +x 00:08:57.053 05:06:16 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=61447 00:08:57.053 05:06:16 -- spdkcli/tcp.sh@27 -- # waitforlisten 61447 00:08:57.053 05:06:16 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:57.053 05:06:16 -- common/autotest_common.sh@819 -- # '[' -z 61447 ']' 00:08:57.053 05:06:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.053 05:06:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:57.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.053 05:06:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.053 05:06:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:57.053 05:06:16 -- common/autotest_common.sh@10 -- # set +x 00:08:57.312 [2024-07-26 05:06:16.208339] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:08:57.312 [2024-07-26 05:06:16.208503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61447 ] 00:08:57.312 [2024-07-26 05:06:16.381107] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:57.584 [2024-07-26 05:06:16.554810] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:57.584 [2024-07-26 05:06:16.555133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.584 [2024-07-26 05:06:16.555324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.984 05:06:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:58.984 05:06:17 -- common/autotest_common.sh@852 -- # return 0 00:08:58.984 05:06:17 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:58.984 05:06:17 -- spdkcli/tcp.sh@31 -- # socat_pid=61472 00:08:58.984 05:06:17 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:58.984 [ 00:08:58.984 "spdk_get_version", 00:08:58.984 "rpc_get_methods", 00:08:58.984 "trace_get_info", 00:08:58.984 "trace_get_tpoint_group_mask", 00:08:58.984 "trace_disable_tpoint_group", 00:08:58.984 "trace_enable_tpoint_group", 00:08:58.984 "trace_clear_tpoint_mask", 00:08:58.984 "trace_set_tpoint_mask", 00:08:58.984 "framework_get_pci_devices", 00:08:58.984 "framework_get_config", 00:08:58.984 "framework_get_subsystems", 00:08:58.984 "iobuf_get_stats", 00:08:58.984 "iobuf_set_options", 00:08:58.984 "sock_set_default_impl", 00:08:58.984 "sock_impl_set_options", 00:08:58.984 "sock_impl_get_options", 00:08:58.984 "vmd_rescan", 00:08:58.984 "vmd_remove_device", 00:08:58.984 "vmd_enable", 00:08:58.984 "accel_get_stats", 00:08:58.984 "accel_set_options", 00:08:58.984 "accel_set_driver", 00:08:58.984 "accel_crypto_key_destroy", 00:08:58.984 "accel_crypto_keys_get", 00:08:58.984 "accel_crypto_key_create", 00:08:58.984 "accel_assign_opc", 00:08:58.984 "accel_get_module_info", 00:08:58.984 "accel_get_opc_assignments", 00:08:58.984 "notify_get_notifications", 00:08:58.984 "notify_get_types", 00:08:58.984 "bdev_get_histogram", 00:08:58.984 "bdev_enable_histogram", 00:08:58.984 "bdev_set_qos_limit", 00:08:58.984 "bdev_set_qd_sampling_period", 00:08:58.984 "bdev_get_bdevs", 00:08:58.984 "bdev_reset_iostat", 00:08:58.984 "bdev_get_iostat", 00:08:58.984 "bdev_examine", 00:08:58.984 "bdev_wait_for_examine", 00:08:58.984 "bdev_set_options", 00:08:58.984 "scsi_get_devices", 00:08:58.984 "thread_set_cpumask", 00:08:58.984 "framework_get_scheduler", 00:08:58.984 "framework_set_scheduler", 00:08:58.984 "framework_get_reactors", 00:08:58.984 "thread_get_io_channels", 00:08:58.984 "thread_get_pollers", 00:08:58.984 "thread_get_stats", 00:08:58.984 "framework_monitor_context_switch", 00:08:58.984 "spdk_kill_instance", 00:08:58.984 "log_enable_timestamps", 00:08:58.984 "log_get_flags", 00:08:58.984 "log_clear_flag", 00:08:58.984 "log_set_flag", 00:08:58.984 "log_get_level", 00:08:58.984 "log_set_level", 00:08:58.984 "log_get_print_level", 00:08:58.984 "log_set_print_level", 00:08:58.984 "framework_enable_cpumask_locks", 00:08:58.984 "framework_disable_cpumask_locks", 00:08:58.984 "framework_wait_init", 00:08:58.984 "framework_start_init", 00:08:58.984 "virtio_blk_create_transport", 00:08:58.984 "virtio_blk_get_transports", 
00:08:58.984 "vhost_controller_set_coalescing", 00:08:58.984 "vhost_get_controllers", 00:08:58.984 "vhost_delete_controller", 00:08:58.984 "vhost_create_blk_controller", 00:08:58.984 "vhost_scsi_controller_remove_target", 00:08:58.984 "vhost_scsi_controller_add_target", 00:08:58.984 "vhost_start_scsi_controller", 00:08:58.984 "vhost_create_scsi_controller", 00:08:58.984 "ublk_recover_disk", 00:08:58.984 "ublk_get_disks", 00:08:58.984 "ublk_stop_disk", 00:08:58.984 "ublk_start_disk", 00:08:58.984 "ublk_destroy_target", 00:08:58.984 "ublk_create_target", 00:08:58.984 "nbd_get_disks", 00:08:58.984 "nbd_stop_disk", 00:08:58.984 "nbd_start_disk", 00:08:58.984 "env_dpdk_get_mem_stats", 00:08:58.984 "nvmf_subsystem_get_listeners", 00:08:58.984 "nvmf_subsystem_get_qpairs", 00:08:58.984 "nvmf_subsystem_get_controllers", 00:08:58.984 "nvmf_get_stats", 00:08:58.984 "nvmf_get_transports", 00:08:58.984 "nvmf_create_transport", 00:08:58.984 "nvmf_get_targets", 00:08:58.984 "nvmf_delete_target", 00:08:58.984 "nvmf_create_target", 00:08:58.984 "nvmf_subsystem_allow_any_host", 00:08:58.984 "nvmf_subsystem_remove_host", 00:08:58.984 "nvmf_subsystem_add_host", 00:08:58.984 "nvmf_subsystem_remove_ns", 00:08:58.984 "nvmf_subsystem_add_ns", 00:08:58.984 "nvmf_subsystem_listener_set_ana_state", 00:08:58.984 "nvmf_discovery_get_referrals", 00:08:58.984 "nvmf_discovery_remove_referral", 00:08:58.984 "nvmf_discovery_add_referral", 00:08:58.984 "nvmf_subsystem_remove_listener", 00:08:58.984 "nvmf_subsystem_add_listener", 00:08:58.984 "nvmf_delete_subsystem", 00:08:58.984 "nvmf_create_subsystem", 00:08:58.984 "nvmf_get_subsystems", 00:08:58.984 "nvmf_set_crdt", 00:08:58.984 "nvmf_set_config", 00:08:58.984 "nvmf_set_max_subsystems", 00:08:58.984 "iscsi_set_options", 00:08:58.984 "iscsi_get_auth_groups", 00:08:58.984 "iscsi_auth_group_remove_secret", 00:08:58.984 "iscsi_auth_group_add_secret", 00:08:58.984 "iscsi_delete_auth_group", 00:08:58.984 "iscsi_create_auth_group", 00:08:58.984 "iscsi_set_discovery_auth", 00:08:58.985 "iscsi_get_options", 00:08:58.985 "iscsi_target_node_request_logout", 00:08:58.985 "iscsi_target_node_set_redirect", 00:08:58.985 "iscsi_target_node_set_auth", 00:08:58.985 "iscsi_target_node_add_lun", 00:08:58.985 "iscsi_get_connections", 00:08:58.985 "iscsi_portal_group_set_auth", 00:08:58.985 "iscsi_start_portal_group", 00:08:58.985 "iscsi_delete_portal_group", 00:08:58.985 "iscsi_create_portal_group", 00:08:58.985 "iscsi_get_portal_groups", 00:08:58.985 "iscsi_delete_target_node", 00:08:58.985 "iscsi_target_node_remove_pg_ig_maps", 00:08:58.985 "iscsi_target_node_add_pg_ig_maps", 00:08:58.985 "iscsi_create_target_node", 00:08:58.985 "iscsi_get_target_nodes", 00:08:58.985 "iscsi_delete_initiator_group", 00:08:58.985 "iscsi_initiator_group_remove_initiators", 00:08:58.985 "iscsi_initiator_group_add_initiators", 00:08:58.985 "iscsi_create_initiator_group", 00:08:58.985 "iscsi_get_initiator_groups", 00:08:58.985 "iaa_scan_accel_module", 00:08:58.985 "dsa_scan_accel_module", 00:08:58.985 "ioat_scan_accel_module", 00:08:58.985 "accel_error_inject_error", 00:08:58.985 "bdev_iscsi_delete", 00:08:58.985 "bdev_iscsi_create", 00:08:58.985 "bdev_iscsi_set_options", 00:08:58.985 "bdev_virtio_attach_controller", 00:08:58.985 "bdev_virtio_scsi_get_devices", 00:08:58.985 "bdev_virtio_detach_controller", 00:08:58.985 "bdev_virtio_blk_set_hotplug", 00:08:58.985 "bdev_ftl_set_property", 00:08:58.985 "bdev_ftl_get_properties", 00:08:58.985 "bdev_ftl_get_stats", 00:08:58.985 "bdev_ftl_unmap", 00:08:58.985 
"bdev_ftl_unload", 00:08:58.985 "bdev_ftl_delete", 00:08:58.985 "bdev_ftl_load", 00:08:58.985 "bdev_ftl_create", 00:08:58.985 "bdev_aio_delete", 00:08:58.985 "bdev_aio_rescan", 00:08:58.985 "bdev_aio_create", 00:08:58.985 "blobfs_create", 00:08:58.985 "blobfs_detect", 00:08:58.985 "blobfs_set_cache_size", 00:08:58.985 "bdev_zone_block_delete", 00:08:58.985 "bdev_zone_block_create", 00:08:58.985 "bdev_delay_delete", 00:08:58.985 "bdev_delay_create", 00:08:58.985 "bdev_delay_update_latency", 00:08:58.985 "bdev_split_delete", 00:08:58.985 "bdev_split_create", 00:08:58.985 "bdev_error_inject_error", 00:08:58.985 "bdev_error_delete", 00:08:58.985 "bdev_error_create", 00:08:58.985 "bdev_raid_set_options", 00:08:58.985 "bdev_raid_remove_base_bdev", 00:08:58.985 "bdev_raid_add_base_bdev", 00:08:58.985 "bdev_raid_delete", 00:08:58.985 "bdev_raid_create", 00:08:58.985 "bdev_raid_get_bdevs", 00:08:58.985 "bdev_lvol_grow_lvstore", 00:08:58.985 "bdev_lvol_get_lvols", 00:08:58.985 "bdev_lvol_get_lvstores", 00:08:58.985 "bdev_lvol_delete", 00:08:58.985 "bdev_lvol_set_read_only", 00:08:58.985 "bdev_lvol_resize", 00:08:58.985 "bdev_lvol_decouple_parent", 00:08:58.985 "bdev_lvol_inflate", 00:08:58.985 "bdev_lvol_rename", 00:08:58.985 "bdev_lvol_clone_bdev", 00:08:58.985 "bdev_lvol_clone", 00:08:58.985 "bdev_lvol_snapshot", 00:08:58.985 "bdev_lvol_create", 00:08:58.985 "bdev_lvol_delete_lvstore", 00:08:58.985 "bdev_lvol_rename_lvstore", 00:08:58.985 "bdev_lvol_create_lvstore", 00:08:58.985 "bdev_passthru_delete", 00:08:58.985 "bdev_passthru_create", 00:08:58.985 "bdev_nvme_cuse_unregister", 00:08:58.985 "bdev_nvme_cuse_register", 00:08:58.985 "bdev_opal_new_user", 00:08:58.985 "bdev_opal_set_lock_state", 00:08:58.985 "bdev_opal_delete", 00:08:58.985 "bdev_opal_get_info", 00:08:58.985 "bdev_opal_create", 00:08:58.985 "bdev_nvme_opal_revert", 00:08:58.985 "bdev_nvme_opal_init", 00:08:58.985 "bdev_nvme_send_cmd", 00:08:58.985 "bdev_nvme_get_path_iostat", 00:08:58.985 "bdev_nvme_get_mdns_discovery_info", 00:08:58.985 "bdev_nvme_stop_mdns_discovery", 00:08:58.985 "bdev_nvme_start_mdns_discovery", 00:08:58.985 "bdev_nvme_set_multipath_policy", 00:08:58.985 "bdev_nvme_set_preferred_path", 00:08:58.985 "bdev_nvme_get_io_paths", 00:08:58.985 "bdev_nvme_remove_error_injection", 00:08:58.985 "bdev_nvme_add_error_injection", 00:08:58.985 "bdev_nvme_get_discovery_info", 00:08:58.985 "bdev_nvme_stop_discovery", 00:08:58.985 "bdev_nvme_start_discovery", 00:08:58.985 "bdev_nvme_get_controller_health_info", 00:08:58.985 "bdev_nvme_disable_controller", 00:08:58.985 "bdev_nvme_enable_controller", 00:08:58.985 "bdev_nvme_reset_controller", 00:08:58.985 "bdev_nvme_get_transport_statistics", 00:08:58.985 "bdev_nvme_apply_firmware", 00:08:58.985 "bdev_nvme_detach_controller", 00:08:58.985 "bdev_nvme_get_controllers", 00:08:58.985 "bdev_nvme_attach_controller", 00:08:58.985 "bdev_nvme_set_hotplug", 00:08:58.985 "bdev_nvme_set_options", 00:08:58.985 "bdev_null_resize", 00:08:58.985 "bdev_null_delete", 00:08:58.985 "bdev_null_create", 00:08:58.985 "bdev_malloc_delete", 00:08:58.985 "bdev_malloc_create" 00:08:58.985 ] 00:08:58.985 05:06:18 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:58.985 05:06:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:58.985 05:06:18 -- common/autotest_common.sh@10 -- # set +x 00:08:59.244 05:06:18 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:59.244 05:06:18 -- spdkcli/tcp.sh@38 -- # killprocess 61447 00:08:59.244 05:06:18 -- common/autotest_common.sh@926 -- # '[' 
-z 61447 ']' 00:08:59.244 05:06:18 -- common/autotest_common.sh@930 -- # kill -0 61447 00:08:59.244 05:06:18 -- common/autotest_common.sh@931 -- # uname 00:08:59.244 05:06:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:59.244 05:06:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61447 00:08:59.244 05:06:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:59.244 05:06:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:59.244 killing process with pid 61447 00:08:59.244 05:06:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61447' 00:08:59.244 05:06:18 -- common/autotest_common.sh@945 -- # kill 61447 00:08:59.244 05:06:18 -- common/autotest_common.sh@950 -- # wait 61447 00:09:01.147 00:09:01.147 real 0m4.019s 00:09:01.147 user 0m7.479s 00:09:01.147 sys 0m0.560s 00:09:01.147 05:06:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.147 05:06:20 -- common/autotest_common.sh@10 -- # set +x 00:09:01.147 ************************************ 00:09:01.147 END TEST spdkcli_tcp 00:09:01.147 ************************************ 00:09:01.147 05:06:20 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:01.147 05:06:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:01.147 05:06:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:01.147 05:06:20 -- common/autotest_common.sh@10 -- # set +x 00:09:01.147 ************************************ 00:09:01.147 START TEST dpdk_mem_utility 00:09:01.147 ************************************ 00:09:01.147 05:06:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:01.147 * Looking for test storage... 00:09:01.147 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:09:01.147 05:06:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:01.147 05:06:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61562 00:09:01.147 05:06:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61562 00:09:01.147 05:06:20 -- common/autotest_common.sh@819 -- # '[' -z 61562 ']' 00:09:01.147 05:06:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:01.147 05:06:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.147 05:06:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:01.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.147 05:06:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.147 05:06:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:01.147 05:06:20 -- common/autotest_common.sh@10 -- # set +x 00:09:01.406 [2024-07-26 05:06:20.271682] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:09:01.406 [2024-07-26 05:06:20.271849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61562 ] 00:09:01.406 [2024-07-26 05:06:20.440975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.665 [2024-07-26 05:06:20.604282] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:01.665 [2024-07-26 05:06:20.604513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.063 05:06:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:03.063 05:06:21 -- common/autotest_common.sh@852 -- # return 0 00:09:03.063 05:06:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:03.063 05:06:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:03.063 05:06:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:03.063 05:06:21 -- common/autotest_common.sh@10 -- # set +x 00:09:03.063 { 00:09:03.063 "filename": "/tmp/spdk_mem_dump.txt" 00:09:03.063 } 00:09:03.063 05:06:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:03.063 05:06:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:03.063 DPDK memory size 820.000000 MiB in 1 heap(s) 00:09:03.063 1 heaps totaling size 820.000000 MiB 00:09:03.063 size: 820.000000 MiB heap id: 0 00:09:03.063 end heaps---------- 00:09:03.063 8 mempools totaling size 598.116089 MiB 00:09:03.063 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:03.063 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:03.063 size: 84.521057 MiB name: bdev_io_61562 00:09:03.063 size: 51.011292 MiB name: evtpool_61562 00:09:03.063 size: 50.003479 MiB name: msgpool_61562 00:09:03.063 size: 21.763794 MiB name: PDU_Pool 00:09:03.063 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:03.063 size: 0.026123 MiB name: Session_Pool 00:09:03.063 end mempools------- 00:09:03.063 6 memzones totaling size 4.142822 MiB 00:09:03.063 size: 1.000366 MiB name: RG_ring_0_61562 00:09:03.063 size: 1.000366 MiB name: RG_ring_1_61562 00:09:03.063 size: 1.000366 MiB name: RG_ring_4_61562 00:09:03.063 size: 1.000366 MiB name: RG_ring_5_61562 00:09:03.063 size: 0.125366 MiB name: RG_ring_2_61562 00:09:03.063 size: 0.015991 MiB name: RG_ring_3_61562 00:09:03.063 end memzones------- 00:09:03.063 05:06:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:03.063 heap id: 0 total size: 820.000000 MiB number of busy elements: 304 number of free elements: 18 00:09:03.063 list of free elements. 
size: 18.450562 MiB 00:09:03.063 element at address: 0x200000400000 with size: 1.999451 MiB 00:09:03.063 element at address: 0x200000800000 with size: 1.996887 MiB 00:09:03.063 element at address: 0x200007000000 with size: 1.995972 MiB 00:09:03.063 element at address: 0x20000b200000 with size: 1.995972 MiB 00:09:03.063 element at address: 0x200019100040 with size: 0.999939 MiB 00:09:03.063 element at address: 0x200019500040 with size: 0.999939 MiB 00:09:03.063 element at address: 0x200019600000 with size: 0.999084 MiB 00:09:03.063 element at address: 0x200003e00000 with size: 0.996094 MiB 00:09:03.063 element at address: 0x200032200000 with size: 0.994324 MiB 00:09:03.063 element at address: 0x200018e00000 with size: 0.959656 MiB 00:09:03.063 element at address: 0x200019900040 with size: 0.936401 MiB 00:09:03.063 element at address: 0x200000200000 with size: 0.829224 MiB 00:09:03.063 element at address: 0x20001b000000 with size: 0.563904 MiB 00:09:03.063 element at address: 0x200019200000 with size: 0.487976 MiB 00:09:03.063 element at address: 0x200019a00000 with size: 0.485413 MiB 00:09:03.063 element at address: 0x200013800000 with size: 0.467651 MiB 00:09:03.063 element at address: 0x200028400000 with size: 0.390442 MiB 00:09:03.063 element at address: 0x200003a00000 with size: 0.352234 MiB 00:09:03.063 list of standard malloc elements. size: 199.285034 MiB 00:09:03.063 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:09:03.063 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:09:03.064 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:09:03.064 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:09:03.064 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:09:03.064 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:09:03.064 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:09:03.064 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:09:03.064 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:09:03.064 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:09:03.064 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:09:03.064 element at address: 0x2000002d4480 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d4580 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d4680 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d5680 with size: 0.000244 MiB 
00:09:03.064 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:09:03.064 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:09:03.064 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:09:03.064 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:09:03.064 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:09:03.064 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:09:03.064 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:09:03.064 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:09:03.064 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:09:03.064 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:09:03.064 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:09:03.064 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:09:03.064 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:09:03.064 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:09:03.064 element at 
address: 0x200003a5afc0 with size: 0.000244 MiB 00:09:03.064 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:09:03.064 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:09:03.064 element at address: 0x200003aff980 with size: 0.000244 MiB 00:09:03.064 element at address: 0x200003affa80 with size: 0.000244 MiB 00:09:03.064 element at address: 0x200003eff000 with size: 0.000244 MiB 00:09:03.064 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:09:03.064 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:09:03.064 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:09:03.064 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:09:03.064 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:09:03.064 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:09:03.064 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:09:03.064 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:09:03.064 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:09:03.064 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:09:03.064 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:09:03.064 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:09:03.064 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:09:03.064 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:09:03.064 element at address: 0x200013877b80 with size: 0.000244 MiB 00:09:03.064 element at address: 0x200013877c80 with size: 0.000244 MiB 00:09:03.064 element at address: 0x200013877d80 with size: 0.000244 MiB 00:09:03.064 element at address: 0x200013877e80 with size: 0.000244 MiB 00:09:03.064 element at address: 0x200013877f80 with size: 0.000244 MiB 00:09:03.064 element at address: 0x200013878080 with size: 0.000244 MiB 00:09:03.064 element at address: 0x200013878180 with size: 0.000244 MiB 00:09:03.064 element at address: 0x200013878280 with size: 0.000244 MiB 00:09:03.064 element at address: 0x200013878380 with size: 0.000244 MiB 00:09:03.064 element at address: 0x200013878480 with size: 0.000244 MiB 00:09:03.064 element at address: 0x200013878580 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:09:03.064 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:09:03.064 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:09:03.064 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:09:03.064 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:09:03.064 element at address: 0x20001927d1c0 
with size: 0.000244 MiB 00:09:03.064 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:09:03.064 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:09:03.064 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:09:03.064 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:09:03.064 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:09:03.064 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:09:03.064 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:09:03.064 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:09:03.064 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:09:03.064 element at address: 0x200019abc680 with size: 0.000244 MiB 00:09:03.064 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:09:03.064 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:09:03.064 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0928c0 with size: 0.000244 MiB 
00:09:03.065 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:09:03.065 element at address: 0x200028463f40 with size: 0.000244 MiB 00:09:03.065 element at address: 0x200028464040 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846af80 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846b080 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846b180 with size: 0.000244 MiB 00:09:03.065 element at 
address: 0x20002846b280 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846b380 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846b480 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846b580 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846b680 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846b780 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846b880 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846b980 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846be80 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846c080 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846c180 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846c280 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846c380 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846c480 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846c580 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846c680 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846c780 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846c880 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846c980 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846d080 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846d180 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846d280 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846d380 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846d480 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846d580 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846d680 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846d780 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846d880 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846d980 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846da80 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846db80 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846de80 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846df80 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846e080 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846e180 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846e280 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846e380 
with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846e480 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846e580 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846e680 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846e780 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846e880 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846e980 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:09:03.065 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:09:03.066 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:09:03.066 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:09:03.066 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:09:03.066 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:09:03.066 element at address: 0x20002846f080 with size: 0.000244 MiB 00:09:03.066 element at address: 0x20002846f180 with size: 0.000244 MiB 00:09:03.066 element at address: 0x20002846f280 with size: 0.000244 MiB 00:09:03.066 element at address: 0x20002846f380 with size: 0.000244 MiB 00:09:03.066 element at address: 0x20002846f480 with size: 0.000244 MiB 00:09:03.066 element at address: 0x20002846f580 with size: 0.000244 MiB 00:09:03.066 element at address: 0x20002846f680 with size: 0.000244 MiB 00:09:03.066 element at address: 0x20002846f780 with size: 0.000244 MiB 00:09:03.066 element at address: 0x20002846f880 with size: 0.000244 MiB 00:09:03.066 element at address: 0x20002846f980 with size: 0.000244 MiB 00:09:03.066 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:09:03.066 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:09:03.066 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:09:03.066 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:09:03.066 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:09:03.066 list of memzone associated elements. 
size: 602.264404 MiB 00:09:03.066 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:09:03.066 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:03.066 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:09:03.066 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:03.066 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:09:03.066 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61562_0 00:09:03.066 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:09:03.066 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61562_0 00:09:03.066 element at address: 0x200003fff340 with size: 48.003113 MiB 00:09:03.066 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61562_0 00:09:03.066 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:09:03.066 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:03.066 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:09:03.066 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:03.066 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:09:03.066 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61562 00:09:03.066 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:09:03.066 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61562 00:09:03.066 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:09:03.066 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61562 00:09:03.066 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:09:03.066 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:03.066 element at address: 0x200019abc780 with size: 1.008179 MiB 00:09:03.066 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:03.066 element at address: 0x200018efde00 with size: 1.008179 MiB 00:09:03.066 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:03.066 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:09:03.066 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:03.066 element at address: 0x200003eff100 with size: 1.000549 MiB 00:09:03.066 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61562 00:09:03.066 element at address: 0x200003affb80 with size: 1.000549 MiB 00:09:03.066 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61562 00:09:03.066 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:09:03.066 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61562 00:09:03.066 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:09:03.066 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61562 00:09:03.066 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:09:03.066 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61562 00:09:03.066 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:09:03.066 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:03.066 element at address: 0x200013878680 with size: 0.500549 MiB 00:09:03.066 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:03.066 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:09:03.066 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:03.066 element at address: 0x200003adf740 with size: 0.125549 MiB 00:09:03.066 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_61562 00:09:03.066 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:09:03.066 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:03.066 element at address: 0x200028464140 with size: 0.023804 MiB 00:09:03.066 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:03.066 element at address: 0x200003adb500 with size: 0.016174 MiB 00:09:03.066 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61562 00:09:03.066 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:09:03.066 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:03.066 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:09:03.066 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61562 00:09:03.066 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:09:03.066 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61562 00:09:03.066 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:09:03.066 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:03.066 05:06:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:03.066 05:06:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61562 00:09:03.066 05:06:22 -- common/autotest_common.sh@926 -- # '[' -z 61562 ']' 00:09:03.066 05:06:22 -- common/autotest_common.sh@930 -- # kill -0 61562 00:09:03.066 05:06:22 -- common/autotest_common.sh@931 -- # uname 00:09:03.066 05:06:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:03.066 05:06:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61562 00:09:03.066 05:06:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:03.066 05:06:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:03.066 killing process with pid 61562 00:09:03.066 05:06:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61562' 00:09:03.066 05:06:22 -- common/autotest_common.sh@945 -- # kill 61562 00:09:03.066 05:06:22 -- common/autotest_common.sh@950 -- # wait 61562 00:09:04.968 00:09:04.968 real 0m3.878s 00:09:04.968 user 0m4.191s 00:09:04.968 sys 0m0.502s 00:09:04.968 05:06:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.968 05:06:24 -- common/autotest_common.sh@10 -- # set +x 00:09:04.968 ************************************ 00:09:04.968 END TEST dpdk_mem_utility 00:09:04.968 ************************************ 00:09:04.968 05:06:24 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:04.968 05:06:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:04.968 05:06:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:04.968 05:06:24 -- common/autotest_common.sh@10 -- # set +x 00:09:04.968 ************************************ 00:09:04.968 START TEST event 00:09:04.968 ************************************ 00:09:04.968 05:06:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:05.226 * Looking for test storage... 
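The dpdk_mem_utility test above is torn down with the killprocess helper traced in the log: probe the pid with kill -0, read the process name back with ps, send SIGTERM, then wait for the reactor to exit. The following is only a minimal sketch of that kill-and-wait pattern; the function name and error handling are simplified assumptions, not the exact autotest_common.sh implementation.

  # Illustrative restatement of the teardown sequence traced above.
  kill_spdk_target() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0          # target already gone
      ps --no-headers -o comm= "$pid"                  # log which process is being killed
      kill "$pid"                                      # SIGTERM, as in 'kill 61562'
      while kill -0 "$pid" 2>/dev/null; do sleep 0.1; done   # stand-in for 'wait 61562'
  }

  kill_spdk_target 61562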
00:09:05.226 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:05.226 05:06:24 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:05.226 05:06:24 -- bdev/nbd_common.sh@6 -- # set -e 00:09:05.226 05:06:24 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:05.226 05:06:24 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:09:05.226 05:06:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:05.226 05:06:24 -- common/autotest_common.sh@10 -- # set +x 00:09:05.226 ************************************ 00:09:05.226 START TEST event_perf 00:09:05.226 ************************************ 00:09:05.226 05:06:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:05.226 Running I/O for 1 seconds...[2024-07-26 05:06:24.183100] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:05.226 [2024-07-26 05:06:24.183383] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61663 ] 00:09:05.485 [2024-07-26 05:06:24.354925] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:05.485 [2024-07-26 05:06:24.521055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.485 [2024-07-26 05:06:24.521141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.485 [2024-07-26 05:06:24.521217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.485 Running I/O for 1 seconds...[2024-07-26 05:06:24.521233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:06.863 00:09:06.863 lcore 0: 204989 00:09:06.863 lcore 1: 204989 00:09:06.863 lcore 2: 204988 00:09:06.863 lcore 3: 204988 00:09:06.863 done. 00:09:06.863 00:09:06.863 real 0m1.746s 00:09:06.863 user 0m4.517s 00:09:06.863 sys 0m0.127s 00:09:06.863 05:06:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.863 ************************************ 00:09:06.863 END TEST event_perf 00:09:06.863 05:06:25 -- common/autotest_common.sh@10 -- # set +x 00:09:06.863 ************************************ 00:09:06.863 05:06:25 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:06.863 05:06:25 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:06.863 05:06:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:06.863 05:06:25 -- common/autotest_common.sh@10 -- # set +x 00:09:06.863 ************************************ 00:09:06.863 START TEST event_reactor 00:09:06.863 ************************************ 00:09:06.863 05:06:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:07.122 [2024-07-26 05:06:25.979385] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
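The event_perf run above drives four reactors (-m 0xF) for one second (-t 1) and prints one event counter per lcore. A hedged sketch of reproducing that by hand and summing the counters; SPDK_DIR and the log path are assumptions, not part of the test scripts.

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1 | tee /tmp/event_perf.log
  # Sum the 'lcore N: <count>' lines to get the total events handled in the run.
  awk '/^lcore [0-9]+:/ {sum += $3} END {print "total events:", sum}' /tmp/event_perf.log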
00:09:07.122 [2024-07-26 05:06:25.979518] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61697 ] 00:09:07.122 [2024-07-26 05:06:26.154165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.380 [2024-07-26 05:06:26.333992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.757 test_start 00:09:08.757 oneshot 00:09:08.757 tick 100 00:09:08.757 tick 100 00:09:08.757 tick 250 00:09:08.757 tick 100 00:09:08.757 tick 100 00:09:08.757 tick 100 00:09:08.757 tick 250 00:09:08.757 tick 500 00:09:08.757 tick 100 00:09:08.757 tick 100 00:09:08.757 tick 250 00:09:08.757 tick 100 00:09:08.757 tick 100 00:09:08.757 test_end 00:09:08.757 00:09:08.757 real 0m1.762s 00:09:08.757 user 0m1.553s 00:09:08.757 sys 0m0.108s 00:09:08.757 05:06:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.757 05:06:27 -- common/autotest_common.sh@10 -- # set +x 00:09:08.757 ************************************ 00:09:08.757 END TEST event_reactor 00:09:08.757 ************************************ 00:09:08.757 05:06:27 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:08.757 05:06:27 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:08.757 05:06:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:08.757 05:06:27 -- common/autotest_common.sh@10 -- # set +x 00:09:08.757 ************************************ 00:09:08.757 START TEST event_reactor_perf 00:09:08.757 ************************************ 00:09:08.757 05:06:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:08.757 [2024-07-26 05:06:27.791930] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:09:08.757 [2024-07-26 05:06:27.792125] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61739 ] 00:09:09.016 [2024-07-26 05:06:27.963869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.274 [2024-07-26 05:06:28.142606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.685 test_start 00:09:10.685 test_end 00:09:10.685 Performance: 305109 events per second 00:09:10.685 00:09:10.685 real 0m1.775s 00:09:10.685 user 0m1.560s 00:09:10.685 sys 0m0.114s 00:09:10.685 05:06:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.685 05:06:29 -- common/autotest_common.sh@10 -- # set +x 00:09:10.685 ************************************ 00:09:10.685 END TEST event_reactor_perf 00:09:10.685 ************************************ 00:09:10.685 05:06:29 -- event/event.sh@49 -- # uname -s 00:09:10.685 05:06:29 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:10.685 05:06:29 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:10.685 05:06:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:10.685 05:06:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:10.685 05:06:29 -- common/autotest_common.sh@10 -- # set +x 00:09:10.685 ************************************ 00:09:10.685 START TEST event_scheduler 00:09:10.685 ************************************ 00:09:10.685 05:06:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:10.685 * Looking for test storage... 00:09:10.685 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:09:10.685 05:06:29 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:10.685 05:06:29 -- scheduler/scheduler.sh@35 -- # scheduler_pid=61806 00:09:10.685 05:06:29 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:10.685 05:06:29 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:10.685 05:06:29 -- scheduler/scheduler.sh@37 -- # waitforlisten 61806 00:09:10.685 05:06:29 -- common/autotest_common.sh@819 -- # '[' -z 61806 ']' 00:09:10.685 05:06:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.686 05:06:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:10.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.686 05:06:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.686 05:06:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:10.686 05:06:29 -- common/autotest_common.sh@10 -- # set +x 00:09:10.686 [2024-07-26 05:06:29.741224] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
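The scheduler test starts its SPDK app with --wait-for-rpc and then blocks in waitforlisten until the RPC socket answers before configuring it. One way to express that wait, shown as a sketch rather than the real autotest_common.sh helper, is to poll rpc_get_methods against the app's socket while checking the pid is still alive:

  wait_for_rpc_socket() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock}
      local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
      for _ in $(seq 1 100); do                       # retry budget is an assumption
          kill -0 "$pid" 2>/dev/null || return 1      # app died before listening
          "$rpc" -s "$sock" rpc_get_methods &>/dev/null && return 0
          sleep 0.1
      done
      return 1
  }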
00:09:10.686 [2024-07-26 05:06:29.741406] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61806 ] 00:09:10.944 [2024-07-26 05:06:29.917978] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:11.203 [2024-07-26 05:06:30.163526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.203 [2024-07-26 05:06:30.163635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.203 [2024-07-26 05:06:30.163938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:11.203 [2024-07-26 05:06:30.164274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:11.770 05:06:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:11.770 05:06:30 -- common/autotest_common.sh@852 -- # return 0 00:09:11.770 05:06:30 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:11.770 05:06:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:11.770 05:06:30 -- common/autotest_common.sh@10 -- # set +x 00:09:11.770 POWER: Env isn't set yet! 00:09:11.770 POWER: Attempting to initialise ACPI cpufreq power management... 00:09:11.770 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:11.770 POWER: Cannot set governor of lcore 0 to userspace 00:09:11.770 POWER: Attempting to initialise PSTAT power management... 00:09:11.770 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:11.770 POWER: Cannot set governor of lcore 0 to performance 00:09:11.770 POWER: Attempting to initialise AMD PSTATE power management... 00:09:11.770 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:11.770 POWER: Cannot set governor of lcore 0 to userspace 00:09:11.770 POWER: Attempting to initialise CPPC power management... 00:09:11.770 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:11.770 POWER: Cannot set governor of lcore 0 to userspace 00:09:11.770 POWER: Attempting to initialise VM power management... 
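The POWER and GUEST_CHANNEL messages around this point mean the VM exposes no cpufreq governor or virtio power channel, so the DPDK governor cannot initialize; the dynamic scheduler is still installed and the test continues. Done by hand, the RPC sequence the trace performs would look roughly like the sketch below (socket path assumed to be the default):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk.sock
  "$rpc" -s "$sock" framework_set_scheduler dynamic   # select the dynamic scheduler
  "$rpc" -s "$sock" framework_start_init              # finish subsystem init after --wait-for-rpc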
00:09:11.770 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:09:11.770 POWER: Unable to set Power Management Environment for lcore 0 00:09:11.770 [2024-07-26 05:06:30.686568] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:09:11.770 [2024-07-26 05:06:30.686591] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:09:11.770 [2024-07-26 05:06:30.686607] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:09:11.770 [2024-07-26 05:06:30.686819] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:11.770 [2024-07-26 05:06:30.686850] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:11.770 [2024-07-26 05:06:30.686863] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:11.770 05:06:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:11.770 05:06:30 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:11.770 05:06:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:11.770 05:06:30 -- common/autotest_common.sh@10 -- # set +x 00:09:12.030 [2024-07-26 05:06:30.939356] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:09:12.030 05:06:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:12.030 05:06:30 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:12.030 05:06:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:12.030 05:06:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:12.030 05:06:30 -- common/autotest_common.sh@10 -- # set +x 00:09:12.030 ************************************ 00:09:12.030 START TEST scheduler_create_thread 00:09:12.030 ************************************ 00:09:12.030 05:06:30 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:09:12.030 05:06:30 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:12.030 05:06:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:12.030 05:06:30 -- common/autotest_common.sh@10 -- # set +x 00:09:12.030 2 00:09:12.030 05:06:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:12.030 05:06:30 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:12.030 05:06:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:12.030 05:06:30 -- common/autotest_common.sh@10 -- # set +x 00:09:12.030 3 00:09:12.030 05:06:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:12.030 05:06:30 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:12.030 05:06:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:12.030 05:06:30 -- common/autotest_common.sh@10 -- # set +x 00:09:12.030 4 00:09:12.030 05:06:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:12.030 05:06:30 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:12.030 05:06:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:12.030 05:06:30 -- common/autotest_common.sh@10 -- # set +x 00:09:12.030 5 00:09:12.030 05:06:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:12.030 05:06:30 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:12.030 05:06:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:12.030 05:06:30 -- common/autotest_common.sh@10 -- # set +x 00:09:12.030 6 00:09:12.030 05:06:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:12.030 05:06:30 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:12.030 05:06:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:12.030 05:06:30 -- common/autotest_common.sh@10 -- # set +x 00:09:12.030 7 00:09:12.030 05:06:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:12.030 05:06:30 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:12.030 05:06:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:12.030 05:06:31 -- common/autotest_common.sh@10 -- # set +x 00:09:12.030 8 00:09:12.030 05:06:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:12.030 05:06:31 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:12.030 05:06:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:12.030 05:06:31 -- common/autotest_common.sh@10 -- # set +x 00:09:12.030 9 00:09:12.030 05:06:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:12.030 05:06:31 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:12.030 05:06:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:12.030 05:06:31 -- common/autotest_common.sh@10 -- # set +x 00:09:12.030 10 00:09:12.030 05:06:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:12.030 05:06:31 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:12.030 05:06:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:12.030 05:06:31 -- common/autotest_common.sh@10 -- # set +x 00:09:12.030 05:06:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:12.030 05:06:31 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:12.030 05:06:31 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:12.030 05:06:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:12.030 05:06:31 -- common/autotest_common.sh@10 -- # set +x 00:09:12.030 05:06:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:12.030 05:06:31 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:12.030 05:06:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:12.030 05:06:31 -- common/autotest_common.sh@10 -- # set +x 00:09:12.966 05:06:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:12.966 05:06:32 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:12.966 05:06:32 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:12.966 05:06:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:12.966 05:06:32 -- common/autotest_common.sh@10 -- # set +x 00:09:14.342 05:06:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:14.342 00:09:14.342 real 0m2.134s 00:09:14.342 user 0m0.018s 00:09:14.342 sys 0m0.007s 00:09:14.342 05:06:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.342 05:06:33 -- common/autotest_common.sh@10 -- # set +x 00:09:14.342 
************************************ 00:09:14.342 END TEST scheduler_create_thread 00:09:14.342 ************************************ 00:09:14.342 05:06:33 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:14.342 05:06:33 -- scheduler/scheduler.sh@46 -- # killprocess 61806 00:09:14.342 05:06:33 -- common/autotest_common.sh@926 -- # '[' -z 61806 ']' 00:09:14.342 05:06:33 -- common/autotest_common.sh@930 -- # kill -0 61806 00:09:14.342 05:06:33 -- common/autotest_common.sh@931 -- # uname 00:09:14.342 05:06:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:14.342 05:06:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61806 00:09:14.342 05:06:33 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:09:14.342 05:06:33 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:09:14.342 killing process with pid 61806 00:09:14.342 05:06:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61806' 00:09:14.342 05:06:33 -- common/autotest_common.sh@945 -- # kill 61806 00:09:14.342 05:06:33 -- common/autotest_common.sh@950 -- # wait 61806 00:09:14.600 [2024-07-26 05:06:33.563617] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:09:15.536 00:09:15.536 real 0m5.021s 00:09:15.536 user 0m8.270s 00:09:15.536 sys 0m0.501s 00:09:15.536 05:06:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:15.536 05:06:34 -- common/autotest_common.sh@10 -- # set +x 00:09:15.536 ************************************ 00:09:15.536 END TEST event_scheduler 00:09:15.536 ************************************ 00:09:15.795 05:06:34 -- event/event.sh@51 -- # modprobe -n nbd 00:09:15.795 05:06:34 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:15.795 05:06:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:15.795 05:06:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:15.795 05:06:34 -- common/autotest_common.sh@10 -- # set +x 00:09:15.795 ************************************ 00:09:15.795 START TEST app_repeat 00:09:15.795 ************************************ 00:09:15.795 05:06:34 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:09:15.795 05:06:34 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.795 05:06:34 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:15.795 05:06:34 -- event/event.sh@13 -- # local nbd_list 00:09:15.795 05:06:34 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:15.795 05:06:34 -- event/event.sh@14 -- # local bdev_list 00:09:15.795 05:06:34 -- event/event.sh@15 -- # local repeat_times=4 00:09:15.795 05:06:34 -- event/event.sh@17 -- # modprobe nbd 00:09:15.795 05:06:34 -- event/event.sh@19 -- # repeat_pid=61907 00:09:15.795 05:06:34 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:15.795 05:06:34 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:15.795 Process app_repeat pid: 61907 00:09:15.795 05:06:34 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 61907' 00:09:15.795 05:06:34 -- event/event.sh@23 -- # for i in {0..2} 00:09:15.795 spdk_app_start Round 0 00:09:15.795 05:06:34 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:15.795 05:06:34 -- event/event.sh@25 -- # waitforlisten 61907 /var/tmp/spdk-nbd.sock 00:09:15.795 05:06:34 -- common/autotest_common.sh@819 -- # '[' -z 61907 ']' 00:09:15.795 05:06:34 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:15.795 05:06:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:15.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:15.795 05:06:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:15.795 05:06:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:15.795 05:06:34 -- common/autotest_common.sh@10 -- # set +x 00:09:15.795 [2024-07-26 05:06:34.725776] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:15.795 [2024-07-26 05:06:34.725965] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61907 ] 00:09:15.795 [2024-07-26 05:06:34.902993] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:16.053 [2024-07-26 05:06:35.141354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.053 [2024-07-26 05:06:35.141367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.621 05:06:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:16.621 05:06:35 -- common/autotest_common.sh@852 -- # return 0 00:09:16.621 05:06:35 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:16.879 Malloc0 00:09:16.879 05:06:35 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:17.138 Malloc1 00:09:17.138 05:06:36 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:17.138 05:06:36 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:17.138 05:06:36 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:17.138 05:06:36 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:17.138 05:06:36 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:17.138 05:06:36 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:17.138 05:06:36 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:17.138 05:06:36 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:17.138 05:06:36 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:17.138 05:06:36 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:17.138 05:06:36 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:17.138 05:06:36 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:17.138 05:06:36 -- bdev/nbd_common.sh@12 -- # local i 00:09:17.138 05:06:36 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:17.138 05:06:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:17.138 05:06:36 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:17.397 /dev/nbd0 00:09:17.397 05:06:36 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:17.397 05:06:36 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:17.397 05:06:36 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:17.397 05:06:36 -- common/autotest_common.sh@857 -- # local i 00:09:17.397 05:06:36 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:17.397 
05:06:36 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:17.397 05:06:36 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:17.397 05:06:36 -- common/autotest_common.sh@861 -- # break 00:09:17.397 05:06:36 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:17.397 05:06:36 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:17.398 05:06:36 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:17.398 1+0 records in 00:09:17.398 1+0 records out 00:09:17.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290703 s, 14.1 MB/s 00:09:17.398 05:06:36 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:17.398 05:06:36 -- common/autotest_common.sh@874 -- # size=4096 00:09:17.398 05:06:36 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:17.398 05:06:36 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:17.398 05:06:36 -- common/autotest_common.sh@877 -- # return 0 00:09:17.398 05:06:36 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:17.398 05:06:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:17.398 05:06:36 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:17.656 /dev/nbd1 00:09:17.656 05:06:36 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:17.915 05:06:36 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:17.915 05:06:36 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:17.915 05:06:36 -- common/autotest_common.sh@857 -- # local i 00:09:17.915 05:06:36 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:17.915 05:06:36 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:17.915 05:06:36 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:17.915 05:06:36 -- common/autotest_common.sh@861 -- # break 00:09:17.915 05:06:36 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:17.915 05:06:36 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:17.915 05:06:36 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:17.915 1+0 records in 00:09:17.915 1+0 records out 00:09:17.915 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267924 s, 15.3 MB/s 00:09:17.915 05:06:36 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:17.915 05:06:36 -- common/autotest_common.sh@874 -- # size=4096 00:09:17.915 05:06:36 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:17.915 05:06:36 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:17.915 05:06:36 -- common/autotest_common.sh@877 -- # return 0 00:09:17.915 05:06:36 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:17.915 05:06:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:17.915 05:06:36 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:17.915 05:06:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:17.915 05:06:36 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:18.174 { 00:09:18.174 "nbd_device": "/dev/nbd0", 00:09:18.174 "bdev_name": "Malloc0" 00:09:18.174 }, 00:09:18.174 { 00:09:18.174 "nbd_device": 
"/dev/nbd1", 00:09:18.174 "bdev_name": "Malloc1" 00:09:18.174 } 00:09:18.174 ]' 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:18.174 { 00:09:18.174 "nbd_device": "/dev/nbd0", 00:09:18.174 "bdev_name": "Malloc0" 00:09:18.174 }, 00:09:18.174 { 00:09:18.174 "nbd_device": "/dev/nbd1", 00:09:18.174 "bdev_name": "Malloc1" 00:09:18.174 } 00:09:18.174 ]' 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:18.174 /dev/nbd1' 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:18.174 /dev/nbd1' 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@65 -- # count=2 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@95 -- # count=2 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:18.174 256+0 records in 00:09:18.174 256+0 records out 00:09:18.174 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00897192 s, 117 MB/s 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:18.174 256+0 records in 00:09:18.174 256+0 records out 00:09:18.174 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0283404 s, 37.0 MB/s 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:18.174 256+0 records in 00:09:18.174 256+0 records out 00:09:18.174 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0381173 s, 27.5 MB/s 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@85 
-- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@51 -- # local i 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:18.174 05:06:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:18.433 05:06:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:18.433 05:06:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:18.433 05:06:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:18.433 05:06:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:18.433 05:06:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:18.433 05:06:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:18.433 05:06:37 -- bdev/nbd_common.sh@41 -- # break 00:09:18.433 05:06:37 -- bdev/nbd_common.sh@45 -- # return 0 00:09:18.433 05:06:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:18.433 05:06:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:18.692 05:06:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:18.692 05:06:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:18.692 05:06:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:18.692 05:06:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:18.692 05:06:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:18.692 05:06:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:18.692 05:06:37 -- bdev/nbd_common.sh@41 -- # break 00:09:18.692 05:06:37 -- bdev/nbd_common.sh@45 -- # return 0 00:09:18.692 05:06:37 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:18.692 05:06:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:18.692 05:06:37 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:18.951 05:06:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:18.951 05:06:37 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:18.951 05:06:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:18.951 05:06:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:18.951 05:06:37 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:18.951 05:06:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:18.951 05:06:37 -- bdev/nbd_common.sh@65 -- # true 00:09:18.951 05:06:37 -- bdev/nbd_common.sh@65 -- # count=0 00:09:18.951 05:06:37 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:18.951 05:06:37 -- bdev/nbd_common.sh@104 -- # count=0 00:09:18.951 05:06:37 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:18.951 05:06:37 -- bdev/nbd_common.sh@109 -- # return 0 00:09:18.951 05:06:37 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:19.210 05:06:38 -- event/event.sh@35 -- # sleep 3 00:09:20.636 [2024-07-26 05:06:39.501733] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:20.636 [2024-07-26 05:06:39.660068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.636 
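Round 0 of app_repeat above is the standard nbd write/verify pass: fill a scratch file with random data, dd it onto each exported /dev/nbdX with direct I/O, then cmp the device contents back against the file before stopping the disks. A condensed sketch of that pattern, with the scratch path simplified and the loop structure assumed:

  testfile=/tmp/nbdrandtest                # the real test keeps this under test/event/
  dd if=/dev/urandom of="$testfile" bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$testfile" of="$nbd" bs=4096 count=256 oflag=direct   # write through the bdev
      cmp -b -n 1M "$testfile" "$nbd"                              # read back and compare
  done
  rm -f "$testfile"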
[2024-07-26 05:06:39.660070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.895 [2024-07-26 05:06:39.828679] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:20.895 [2024-07-26 05:06:39.828762] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:22.270 05:06:41 -- event/event.sh@23 -- # for i in {0..2} 00:09:22.270 spdk_app_start Round 1 00:09:22.270 05:06:41 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:22.270 05:06:41 -- event/event.sh@25 -- # waitforlisten 61907 /var/tmp/spdk-nbd.sock 00:09:22.270 05:06:41 -- common/autotest_common.sh@819 -- # '[' -z 61907 ']' 00:09:22.270 05:06:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:22.270 05:06:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:22.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:22.270 05:06:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:22.270 05:06:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:22.270 05:06:41 -- common/autotest_common.sh@10 -- # set +x 00:09:22.528 05:06:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:22.528 05:06:41 -- common/autotest_common.sh@852 -- # return 0 00:09:22.528 05:06:41 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:22.787 Malloc0 00:09:22.787 05:06:41 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:23.046 Malloc1 00:09:23.046 05:06:42 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:23.046 05:06:42 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:23.046 05:06:42 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:23.046 05:06:42 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:23.046 05:06:42 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:23.046 05:06:42 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:23.046 05:06:42 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:23.046 05:06:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:23.046 05:06:42 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:23.046 05:06:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:23.046 05:06:42 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:23.046 05:06:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:23.046 05:06:42 -- bdev/nbd_common.sh@12 -- # local i 00:09:23.046 05:06:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:23.046 05:06:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:23.046 05:06:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:23.304 /dev/nbd0 00:09:23.304 05:06:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:23.304 05:06:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:23.304 05:06:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:23.304 05:06:42 -- common/autotest_common.sh@857 -- # local i 00:09:23.304 05:06:42 -- common/autotest_common.sh@859 
-- # (( i = 1 )) 00:09:23.304 05:06:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:23.304 05:06:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:23.304 05:06:42 -- common/autotest_common.sh@861 -- # break 00:09:23.304 05:06:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:23.304 05:06:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:23.304 05:06:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:23.304 1+0 records in 00:09:23.304 1+0 records out 00:09:23.304 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199834 s, 20.5 MB/s 00:09:23.304 05:06:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:23.304 05:06:42 -- common/autotest_common.sh@874 -- # size=4096 00:09:23.304 05:06:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:23.304 05:06:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:23.304 05:06:42 -- common/autotest_common.sh@877 -- # return 0 00:09:23.304 05:06:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:23.304 05:06:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:23.304 05:06:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:23.563 /dev/nbd1 00:09:23.563 05:06:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:23.563 05:06:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:23.563 05:06:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:23.563 05:06:42 -- common/autotest_common.sh@857 -- # local i 00:09:23.563 05:06:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:23.563 05:06:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:23.563 05:06:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:23.563 05:06:42 -- common/autotest_common.sh@861 -- # break 00:09:23.563 05:06:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:23.563 05:06:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:23.563 05:06:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:23.563 1+0 records in 00:09:23.563 1+0 records out 00:09:23.563 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326412 s, 12.5 MB/s 00:09:23.563 05:06:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:23.563 05:06:42 -- common/autotest_common.sh@874 -- # size=4096 00:09:23.563 05:06:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:23.563 05:06:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:23.563 05:06:42 -- common/autotest_common.sh@877 -- # return 0 00:09:23.563 05:06:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:23.563 05:06:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:23.563 05:06:42 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:23.563 05:06:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:23.563 05:06:42 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:23.822 05:06:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:23.822 { 00:09:23.822 "nbd_device": "/dev/nbd0", 00:09:23.822 "bdev_name": "Malloc0" 00:09:23.822 }, 00:09:23.822 { 
00:09:23.822 "nbd_device": "/dev/nbd1", 00:09:23.822 "bdev_name": "Malloc1" 00:09:23.822 } 00:09:23.822 ]' 00:09:23.822 05:06:42 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:23.822 { 00:09:23.822 "nbd_device": "/dev/nbd0", 00:09:23.822 "bdev_name": "Malloc0" 00:09:23.822 }, 00:09:23.822 { 00:09:23.822 "nbd_device": "/dev/nbd1", 00:09:23.822 "bdev_name": "Malloc1" 00:09:23.822 } 00:09:23.822 ]' 00:09:23.822 05:06:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:23.822 05:06:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:23.822 /dev/nbd1' 00:09:23.822 05:06:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:23.822 05:06:42 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:23.822 /dev/nbd1' 00:09:23.822 05:06:42 -- bdev/nbd_common.sh@65 -- # count=2 00:09:23.822 05:06:42 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:23.822 05:06:42 -- bdev/nbd_common.sh@95 -- # count=2 00:09:23.822 05:06:42 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:23.822 05:06:42 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:23.822 05:06:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:23.822 05:06:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:23.822 05:06:42 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:23.822 05:06:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:23.822 05:06:42 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:23.822 05:06:42 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:23.822 256+0 records in 00:09:23.822 256+0 records out 00:09:23.822 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00551494 s, 190 MB/s 00:09:23.822 05:06:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:23.822 05:06:42 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:23.822 256+0 records in 00:09:23.822 256+0 records out 00:09:23.822 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267279 s, 39.2 MB/s 00:09:23.822 05:06:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:23.822 05:06:42 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:24.081 256+0 records in 00:09:24.081 256+0 records out 00:09:24.081 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0385759 s, 27.2 MB/s 00:09:24.081 05:06:42 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:24.081 05:06:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:24.081 05:06:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:24.081 05:06:42 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:24.081 05:06:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:24.081 05:06:42 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:24.081 05:06:42 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:24.081 05:06:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:24.081 05:06:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:24.081 05:06:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:24.081 05:06:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:24.081 
05:06:42 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:24.081 05:06:42 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:24.081 05:06:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:24.081 05:06:42 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:24.081 05:06:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:24.081 05:06:42 -- bdev/nbd_common.sh@51 -- # local i 00:09:24.081 05:06:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:24.081 05:06:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:24.340 05:06:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:24.340 05:06:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:24.340 05:06:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:24.340 05:06:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:24.340 05:06:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:24.340 05:06:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:24.340 05:06:43 -- bdev/nbd_common.sh@41 -- # break 00:09:24.340 05:06:43 -- bdev/nbd_common.sh@45 -- # return 0 00:09:24.340 05:06:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:24.340 05:06:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:24.599 05:06:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:24.599 05:06:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:24.599 05:06:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:24.599 05:06:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:24.599 05:06:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:24.599 05:06:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:24.599 05:06:43 -- bdev/nbd_common.sh@41 -- # break 00:09:24.599 05:06:43 -- bdev/nbd_common.sh@45 -- # return 0 00:09:24.599 05:06:43 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:24.599 05:06:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:24.599 05:06:43 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:24.858 05:06:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:24.858 05:06:43 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:24.858 05:06:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:24.858 05:06:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:24.858 05:06:43 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:24.858 05:06:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:24.858 05:06:43 -- bdev/nbd_common.sh@65 -- # true 00:09:24.858 05:06:43 -- bdev/nbd_common.sh@65 -- # count=0 00:09:24.858 05:06:43 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:24.858 05:06:43 -- bdev/nbd_common.sh@104 -- # count=0 00:09:24.858 05:06:43 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:24.858 05:06:43 -- bdev/nbd_common.sh@109 -- # return 0 00:09:24.858 05:06:43 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:25.117 05:06:44 -- event/event.sh@35 -- # sleep 3 00:09:26.492 [2024-07-26 05:06:45.309983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:26.492 [2024-07-26 05:06:45.487194] reactor.c: 937:reactor_run: *NOTICE*: Reactor 
started on core 0 00:09:26.492 [2024-07-26 05:06:45.487197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.751 [2024-07-26 05:06:45.664211] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:26.751 [2024-07-26 05:06:45.664282] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:28.124 05:06:47 -- event/event.sh@23 -- # for i in {0..2} 00:09:28.124 spdk_app_start Round 2 00:09:28.124 05:06:47 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:28.124 05:06:47 -- event/event.sh@25 -- # waitforlisten 61907 /var/tmp/spdk-nbd.sock 00:09:28.124 05:06:47 -- common/autotest_common.sh@819 -- # '[' -z 61907 ']' 00:09:28.124 05:06:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:28.124 05:06:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:28.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:28.125 05:06:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:28.125 05:06:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:28.125 05:06:47 -- common/autotest_common.sh@10 -- # set +x 00:09:28.394 05:06:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:28.394 05:06:47 -- common/autotest_common.sh@852 -- # return 0 00:09:28.394 05:06:47 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:28.687 Malloc0 00:09:28.687 05:06:47 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:28.945 Malloc1 00:09:28.946 05:06:47 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:28.946 05:06:47 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:28.946 05:06:47 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:28.946 05:06:47 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:28.946 05:06:47 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:28.946 05:06:47 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:28.946 05:06:47 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:28.946 05:06:47 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:28.946 05:06:47 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:28.946 05:06:47 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:28.946 05:06:47 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:28.946 05:06:47 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:28.946 05:06:47 -- bdev/nbd_common.sh@12 -- # local i 00:09:28.946 05:06:47 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:28.946 05:06:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:28.946 05:06:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:29.204 /dev/nbd0 00:09:29.204 05:06:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:29.204 05:06:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:29.204 05:06:48 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:29.204 05:06:48 -- common/autotest_common.sh@857 -- # local i 00:09:29.204 05:06:48 
-- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:29.204 05:06:48 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:29.204 05:06:48 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:29.204 05:06:48 -- common/autotest_common.sh@861 -- # break 00:09:29.204 05:06:48 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:29.204 05:06:48 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:29.204 05:06:48 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:29.204 1+0 records in 00:09:29.204 1+0 records out 00:09:29.204 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260881 s, 15.7 MB/s 00:09:29.204 05:06:48 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:29.204 05:06:48 -- common/autotest_common.sh@874 -- # size=4096 00:09:29.204 05:06:48 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:29.204 05:06:48 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:29.204 05:06:48 -- common/autotest_common.sh@877 -- # return 0 00:09:29.204 05:06:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:29.204 05:06:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:29.204 05:06:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:29.462 /dev/nbd1 00:09:29.462 05:06:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:29.462 05:06:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:29.462 05:06:48 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:29.462 05:06:48 -- common/autotest_common.sh@857 -- # local i 00:09:29.462 05:06:48 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:29.462 05:06:48 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:29.462 05:06:48 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:29.462 05:06:48 -- common/autotest_common.sh@861 -- # break 00:09:29.462 05:06:48 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:29.462 05:06:48 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:29.462 05:06:48 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:29.462 1+0 records in 00:09:29.462 1+0 records out 00:09:29.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233973 s, 17.5 MB/s 00:09:29.462 05:06:48 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:29.462 05:06:48 -- common/autotest_common.sh@874 -- # size=4096 00:09:29.462 05:06:48 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:29.462 05:06:48 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:29.462 05:06:48 -- common/autotest_common.sh@877 -- # return 0 00:09:29.462 05:06:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:29.462 05:06:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:29.462 05:06:48 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:29.462 05:06:48 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:29.462 05:06:48 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:29.720 05:06:48 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:29.720 { 00:09:29.720 "nbd_device": "/dev/nbd0", 00:09:29.720 "bdev_name": "Malloc0" 
00:09:29.720 }, 00:09:29.720 { 00:09:29.720 "nbd_device": "/dev/nbd1", 00:09:29.720 "bdev_name": "Malloc1" 00:09:29.720 } 00:09:29.720 ]' 00:09:29.720 05:06:48 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:29.720 05:06:48 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:29.720 { 00:09:29.720 "nbd_device": "/dev/nbd0", 00:09:29.720 "bdev_name": "Malloc0" 00:09:29.720 }, 00:09:29.720 { 00:09:29.721 "nbd_device": "/dev/nbd1", 00:09:29.721 "bdev_name": "Malloc1" 00:09:29.721 } 00:09:29.721 ]' 00:09:29.721 05:06:48 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:29.721 /dev/nbd1' 00:09:29.721 05:06:48 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:29.721 /dev/nbd1' 00:09:29.721 05:06:48 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:29.721 05:06:48 -- bdev/nbd_common.sh@65 -- # count=2 00:09:29.721 05:06:48 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:29.721 05:06:48 -- bdev/nbd_common.sh@95 -- # count=2 00:09:29.721 05:06:48 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:29.721 05:06:48 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:29.721 05:06:48 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:29.721 05:06:48 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:29.721 05:06:48 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:29.721 05:06:48 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:29.721 05:06:48 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:29.721 05:06:48 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:29.721 256+0 records in 00:09:29.721 256+0 records out 00:09:29.721 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100976 s, 104 MB/s 00:09:29.721 05:06:48 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:29.721 05:06:48 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:29.721 256+0 records in 00:09:29.721 256+0 records out 00:09:29.721 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278188 s, 37.7 MB/s 00:09:29.721 05:06:48 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:29.721 05:06:48 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:29.978 256+0 records in 00:09:29.978 256+0 records out 00:09:29.978 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0295488 s, 35.5 MB/s 00:09:29.978 05:06:48 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:29.978 05:06:48 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:29.979 05:06:48 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:29.979 05:06:48 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:29.979 05:06:48 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:29.979 05:06:48 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:29.979 05:06:48 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:29.979 05:06:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:29.979 05:06:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:29.979 05:06:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:29.979 05:06:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:09:29.979 05:06:48 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:29.979 05:06:48 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:29.979 05:06:48 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:29.979 05:06:48 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:29.979 05:06:48 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:29.979 05:06:48 -- bdev/nbd_common.sh@51 -- # local i 00:09:29.979 05:06:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:29.979 05:06:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:30.236 05:06:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:30.236 05:06:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:30.236 05:06:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:30.236 05:06:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:30.236 05:06:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:30.236 05:06:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:30.236 05:06:49 -- bdev/nbd_common.sh@41 -- # break 00:09:30.236 05:06:49 -- bdev/nbd_common.sh@45 -- # return 0 00:09:30.236 05:06:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:30.236 05:06:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:30.494 05:06:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:30.494 05:06:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:30.494 05:06:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:30.494 05:06:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:30.494 05:06:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:30.494 05:06:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:30.494 05:06:49 -- bdev/nbd_common.sh@41 -- # break 00:09:30.494 05:06:49 -- bdev/nbd_common.sh@45 -- # return 0 00:09:30.494 05:06:49 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:30.494 05:06:49 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:30.494 05:06:49 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:30.752 05:06:49 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:30.752 05:06:49 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:30.752 05:06:49 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:30.752 05:06:49 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:30.752 05:06:49 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:30.752 05:06:49 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:30.752 05:06:49 -- bdev/nbd_common.sh@65 -- # true 00:09:30.752 05:06:49 -- bdev/nbd_common.sh@65 -- # count=0 00:09:30.752 05:06:49 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:30.752 05:06:49 -- bdev/nbd_common.sh@104 -- # count=0 00:09:30.752 05:06:49 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:30.752 05:06:49 -- bdev/nbd_common.sh@109 -- # return 0 00:09:30.752 05:06:49 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:31.010 05:06:50 -- event/event.sh@35 -- # sleep 3 00:09:32.385 [2024-07-26 05:06:51.129780] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:32.385 [2024-07-26 05:06:51.304489] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:09:32.385 [2024-07-26 05:06:51.304500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.385 [2024-07-26 05:06:51.471445] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:32.385 [2024-07-26 05:06:51.471516] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:34.288 05:06:53 -- event/event.sh@38 -- # waitforlisten 61907 /var/tmp/spdk-nbd.sock 00:09:34.288 05:06:53 -- common/autotest_common.sh@819 -- # '[' -z 61907 ']' 00:09:34.288 05:06:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:34.288 05:06:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:34.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:34.288 05:06:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:34.288 05:06:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:34.288 05:06:53 -- common/autotest_common.sh@10 -- # set +x 00:09:34.288 05:06:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:34.288 05:06:53 -- common/autotest_common.sh@852 -- # return 0 00:09:34.288 05:06:53 -- event/event.sh@39 -- # killprocess 61907 00:09:34.288 05:06:53 -- common/autotest_common.sh@926 -- # '[' -z 61907 ']' 00:09:34.288 05:06:53 -- common/autotest_common.sh@930 -- # kill -0 61907 00:09:34.288 05:06:53 -- common/autotest_common.sh@931 -- # uname 00:09:34.288 05:06:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:34.288 05:06:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61907 00:09:34.288 05:06:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:34.288 05:06:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:34.288 killing process with pid 61907 00:09:34.288 05:06:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61907' 00:09:34.288 05:06:53 -- common/autotest_common.sh@945 -- # kill 61907 00:09:34.288 05:06:53 -- common/autotest_common.sh@950 -- # wait 61907 00:09:35.662 spdk_app_start is called in Round 0. 00:09:35.662 Shutdown signal received, stop current app iteration 00:09:35.662 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:09:35.662 spdk_app_start is called in Round 1. 00:09:35.662 Shutdown signal received, stop current app iteration 00:09:35.662 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:09:35.662 spdk_app_start is called in Round 2. 00:09:35.662 Shutdown signal received, stop current app iteration 00:09:35.662 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:09:35.662 spdk_app_start is called in Round 3. 
00:09:35.662 Shutdown signal received, stop current app iteration 00:09:35.662 05:06:54 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:35.662 05:06:54 -- event/event.sh@42 -- # return 0 00:09:35.662 00:09:35.662 real 0m19.724s 00:09:35.662 user 0m41.983s 00:09:35.662 sys 0m2.746s 00:09:35.662 05:06:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:35.662 05:06:54 -- common/autotest_common.sh@10 -- # set +x 00:09:35.662 ************************************ 00:09:35.662 END TEST app_repeat 00:09:35.662 ************************************ 00:09:35.663 05:06:54 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:35.663 05:06:54 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:35.663 05:06:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:35.663 05:06:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:35.663 05:06:54 -- common/autotest_common.sh@10 -- # set +x 00:09:35.663 ************************************ 00:09:35.663 START TEST cpu_locks 00:09:35.663 ************************************ 00:09:35.663 05:06:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:35.663 * Looking for test storage... 00:09:35.663 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:35.663 05:06:54 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:35.663 05:06:54 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:35.663 05:06:54 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:35.663 05:06:54 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:35.663 05:06:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:35.663 05:06:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:35.663 05:06:54 -- common/autotest_common.sh@10 -- # set +x 00:09:35.663 ************************************ 00:09:35.663 START TEST default_locks 00:09:35.663 ************************************ 00:09:35.663 05:06:54 -- common/autotest_common.sh@1104 -- # default_locks 00:09:35.663 05:06:54 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62394 00:09:35.663 05:06:54 -- event/cpu_locks.sh@47 -- # waitforlisten 62394 00:09:35.663 05:06:54 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:35.663 05:06:54 -- common/autotest_common.sh@819 -- # '[' -z 62394 ']' 00:09:35.663 05:06:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.663 05:06:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:35.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.663 05:06:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.663 05:06:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:35.663 05:06:54 -- common/autotest_common.sh@10 -- # set +x 00:09:35.663 [2024-07-26 05:06:54.616297] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:09:35.663 [2024-07-26 05:06:54.616462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62394 ] 00:09:35.921 [2024-07-26 05:06:54.789274] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.921 [2024-07-26 05:06:54.979372] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:35.921 [2024-07-26 05:06:54.979646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.359 05:06:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:37.359 05:06:56 -- common/autotest_common.sh@852 -- # return 0 00:09:37.359 05:06:56 -- event/cpu_locks.sh@49 -- # locks_exist 62394 00:09:37.359 05:06:56 -- event/cpu_locks.sh@22 -- # lslocks -p 62394 00:09:37.359 05:06:56 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:37.616 05:06:56 -- event/cpu_locks.sh@50 -- # killprocess 62394 00:09:37.616 05:06:56 -- common/autotest_common.sh@926 -- # '[' -z 62394 ']' 00:09:37.616 05:06:56 -- common/autotest_common.sh@930 -- # kill -0 62394 00:09:37.616 05:06:56 -- common/autotest_common.sh@931 -- # uname 00:09:37.616 05:06:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:37.874 05:06:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62394 00:09:37.874 05:06:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:37.874 05:06:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:37.874 killing process with pid 62394 00:09:37.874 05:06:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62394' 00:09:37.874 05:06:56 -- common/autotest_common.sh@945 -- # kill 62394 00:09:37.874 05:06:56 -- common/autotest_common.sh@950 -- # wait 62394 00:09:40.404 05:06:58 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62394 00:09:40.404 05:06:58 -- common/autotest_common.sh@640 -- # local es=0 00:09:40.404 05:06:58 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 62394 00:09:40.404 05:06:58 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:09:40.404 05:06:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:40.404 05:06:58 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:09:40.404 05:06:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:40.404 05:06:58 -- common/autotest_common.sh@643 -- # waitforlisten 62394 00:09:40.404 05:06:58 -- common/autotest_common.sh@819 -- # '[' -z 62394 ']' 00:09:40.404 05:06:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.404 05:06:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:40.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.404 05:06:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:40.404 05:06:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:40.404 05:06:58 -- common/autotest_common.sh@10 -- # set +x 00:09:40.404 ERROR: process (pid: 62394) is no longer running 00:09:40.404 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (62394) - No such process 00:09:40.404 05:06:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:40.404 05:06:58 -- common/autotest_common.sh@852 -- # return 1 00:09:40.404 05:06:58 -- common/autotest_common.sh@643 -- # es=1 00:09:40.404 05:06:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:40.404 05:06:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:40.404 05:06:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:40.404 05:06:58 -- event/cpu_locks.sh@54 -- # no_locks 00:09:40.404 05:06:58 -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:40.404 05:06:58 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:40.404 05:06:58 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:40.404 00:09:40.404 real 0m4.371s 00:09:40.404 user 0m4.633s 00:09:40.404 sys 0m0.690s 00:09:40.404 05:06:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:40.404 05:06:58 -- common/autotest_common.sh@10 -- # set +x 00:09:40.404 ************************************ 00:09:40.404 END TEST default_locks 00:09:40.404 ************************************ 00:09:40.404 05:06:58 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:40.404 05:06:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:40.404 05:06:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:40.404 05:06:58 -- common/autotest_common.sh@10 -- # set +x 00:09:40.404 ************************************ 00:09:40.404 START TEST default_locks_via_rpc 00:09:40.404 ************************************ 00:09:40.404 05:06:58 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:09:40.404 05:06:58 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62476 00:09:40.404 05:06:58 -- event/cpu_locks.sh@63 -- # waitforlisten 62476 00:09:40.404 05:06:58 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:40.404 05:06:58 -- common/autotest_common.sh@819 -- # '[' -z 62476 ']' 00:09:40.404 05:06:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.404 05:06:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:40.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.404 05:06:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.404 05:06:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:40.404 05:06:58 -- common/autotest_common.sh@10 -- # set +x 00:09:40.404 [2024-07-26 05:06:59.035151] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:09:40.404 [2024-07-26 05:06:59.035915] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62476 ] 00:09:40.404 [2024-07-26 05:06:59.205650] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.404 [2024-07-26 05:06:59.395793] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:40.404 [2024-07-26 05:06:59.396062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.778 05:07:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:41.778 05:07:00 -- common/autotest_common.sh@852 -- # return 0 00:09:41.778 05:07:00 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:41.778 05:07:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:41.778 05:07:00 -- common/autotest_common.sh@10 -- # set +x 00:09:41.778 05:07:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:41.778 05:07:00 -- event/cpu_locks.sh@67 -- # no_locks 00:09:41.778 05:07:00 -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:41.778 05:07:00 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:41.778 05:07:00 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:41.778 05:07:00 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:41.778 05:07:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:41.778 05:07:00 -- common/autotest_common.sh@10 -- # set +x 00:09:41.778 05:07:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:41.778 05:07:00 -- event/cpu_locks.sh@71 -- # locks_exist 62476 00:09:41.778 05:07:00 -- event/cpu_locks.sh@22 -- # lslocks -p 62476 00:09:41.778 05:07:00 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:42.035 05:07:01 -- event/cpu_locks.sh@73 -- # killprocess 62476 00:09:42.035 05:07:01 -- common/autotest_common.sh@926 -- # '[' -z 62476 ']' 00:09:42.035 05:07:01 -- common/autotest_common.sh@930 -- # kill -0 62476 00:09:42.035 05:07:01 -- common/autotest_common.sh@931 -- # uname 00:09:42.035 05:07:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:42.035 05:07:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62476 00:09:42.035 killing process with pid 62476 00:09:42.035 05:07:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:42.035 05:07:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:42.035 05:07:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62476' 00:09:42.035 05:07:01 -- common/autotest_common.sh@945 -- # kill 62476 00:09:42.035 05:07:01 -- common/autotest_common.sh@950 -- # wait 62476 00:09:44.564 ************************************ 00:09:44.564 END TEST default_locks_via_rpc 00:09:44.564 ************************************ 00:09:44.564 00:09:44.564 real 0m4.220s 00:09:44.564 user 0m4.493s 00:09:44.564 sys 0m0.644s 00:09:44.564 05:07:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:44.564 05:07:03 -- common/autotest_common.sh@10 -- # set +x 00:09:44.564 05:07:03 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:44.564 05:07:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:44.564 05:07:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:44.564 05:07:03 -- common/autotest_common.sh@10 -- # set +x 00:09:44.564 
************************************ 00:09:44.564 START TEST non_locking_app_on_locked_coremask 00:09:44.564 ************************************ 00:09:44.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.564 05:07:03 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:09:44.564 05:07:03 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=62552 00:09:44.564 05:07:03 -- event/cpu_locks.sh@81 -- # waitforlisten 62552 /var/tmp/spdk.sock 00:09:44.564 05:07:03 -- common/autotest_common.sh@819 -- # '[' -z 62552 ']' 00:09:44.564 05:07:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.564 05:07:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:44.564 05:07:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.564 05:07:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:44.564 05:07:03 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:44.564 05:07:03 -- common/autotest_common.sh@10 -- # set +x 00:09:44.564 [2024-07-26 05:07:03.320237] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:44.564 [2024-07-26 05:07:03.320474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62552 ] 00:09:44.564 [2024-07-26 05:07:03.502174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.823 [2024-07-26 05:07:03.691006] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:44.823 [2024-07-26 05:07:03.691499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:46.202 05:07:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:46.202 05:07:05 -- common/autotest_common.sh@852 -- # return 0 00:09:46.202 05:07:05 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=62576 00:09:46.202 05:07:05 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:46.202 05:07:05 -- event/cpu_locks.sh@85 -- # waitforlisten 62576 /var/tmp/spdk2.sock 00:09:46.202 05:07:05 -- common/autotest_common.sh@819 -- # '[' -z 62576 ']' 00:09:46.202 05:07:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:46.202 05:07:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:46.202 05:07:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:46.203 05:07:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:46.203 05:07:05 -- common/autotest_common.sh@10 -- # set +x 00:09:46.203 [2024-07-26 05:07:05.068610] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:46.203 [2024-07-26 05:07:05.068906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62576 ] 00:09:46.203 [2024-07-26 05:07:05.236456] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:46.203 [2024-07-26 05:07:05.236543] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.773 [2024-07-26 05:07:05.609855] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:46.773 [2024-07-26 05:07:05.613149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.676 05:07:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:48.676 05:07:07 -- common/autotest_common.sh@852 -- # return 0 00:09:48.676 05:07:07 -- event/cpu_locks.sh@87 -- # locks_exist 62552 00:09:48.676 05:07:07 -- event/cpu_locks.sh@22 -- # lslocks -p 62552 00:09:48.676 05:07:07 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:49.242 05:07:08 -- event/cpu_locks.sh@89 -- # killprocess 62552 00:09:49.242 05:07:08 -- common/autotest_common.sh@926 -- # '[' -z 62552 ']' 00:09:49.242 05:07:08 -- common/autotest_common.sh@930 -- # kill -0 62552 00:09:49.242 05:07:08 -- common/autotest_common.sh@931 -- # uname 00:09:49.242 05:07:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:49.242 05:07:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62552 00:09:49.242 killing process with pid 62552 00:09:49.242 05:07:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:49.242 05:07:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:49.242 05:07:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62552' 00:09:49.242 05:07:08 -- common/autotest_common.sh@945 -- # kill 62552 00:09:49.242 05:07:08 -- common/autotest_common.sh@950 -- # wait 62552 00:09:53.429 05:07:12 -- event/cpu_locks.sh@90 -- # killprocess 62576 00:09:53.429 05:07:12 -- common/autotest_common.sh@926 -- # '[' -z 62576 ']' 00:09:53.429 05:07:12 -- common/autotest_common.sh@930 -- # kill -0 62576 00:09:53.429 05:07:12 -- common/autotest_common.sh@931 -- # uname 00:09:53.429 05:07:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:53.429 05:07:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62576 00:09:53.429 killing process with pid 62576 00:09:53.429 05:07:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:53.429 05:07:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:53.429 05:07:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62576' 00:09:53.429 05:07:12 -- common/autotest_common.sh@945 -- # kill 62576 00:09:53.429 05:07:12 -- common/autotest_common.sh@950 -- # wait 62576 00:09:55.330 ************************************ 00:09:55.330 END TEST non_locking_app_on_locked_coremask 00:09:55.330 ************************************ 00:09:55.330 00:09:55.330 real 0m10.933s 00:09:55.330 user 0m11.796s 00:09:55.330 sys 0m1.452s 00:09:55.330 05:07:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:55.330 05:07:14 -- common/autotest_common.sh@10 -- # set +x 00:09:55.330 05:07:14 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:55.330 05:07:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:55.330 05:07:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:55.330 05:07:14 -- common/autotest_common.sh@10 -- # set +x 00:09:55.330 ************************************ 00:09:55.330 START TEST locking_app_on_unlocked_coremask 00:09:55.330 ************************************ 00:09:55.330 05:07:14 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:09:55.330 05:07:14 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=62715 00:09:55.330 05:07:14 -- event/cpu_locks.sh@99 -- # waitforlisten 62715 /var/tmp/spdk.sock 00:09:55.330 05:07:14 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:55.330 05:07:14 -- common/autotest_common.sh@819 -- # '[' -z 62715 ']' 00:09:55.330 05:07:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.330 05:07:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:55.330 05:07:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.330 05:07:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:55.330 05:07:14 -- common/autotest_common.sh@10 -- # set +x 00:09:55.330 [2024-07-26 05:07:14.300564] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:55.330 [2024-07-26 05:07:14.300721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62715 ] 00:09:55.605 [2024-07-26 05:07:14.472585] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:55.605 [2024-07-26 05:07:14.472636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.605 [2024-07-26 05:07:14.632467] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:55.605 [2024-07-26 05:07:14.632688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:56.980 05:07:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:56.980 05:07:15 -- common/autotest_common.sh@852 -- # return 0 00:09:56.980 05:07:15 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=62733 00:09:56.980 05:07:15 -- event/cpu_locks.sh@103 -- # waitforlisten 62733 /var/tmp/spdk2.sock 00:09:56.980 05:07:15 -- common/autotest_common.sh@819 -- # '[' -z 62733 ']' 00:09:56.980 05:07:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:56.980 05:07:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:56.980 05:07:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:56.980 05:07:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:56.980 05:07:15 -- common/autotest_common.sh@10 -- # set +x 00:09:56.980 05:07:15 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:56.980 [2024-07-26 05:07:15.976676] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:09:56.980 [2024-07-26 05:07:15.976834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62733 ] 00:09:57.239 [2024-07-26 05:07:16.154888] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.499 [2024-07-26 05:07:16.487459] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:57.499 [2024-07-26 05:07:16.487700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.402 05:07:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:59.402 05:07:18 -- common/autotest_common.sh@852 -- # return 0 00:09:59.402 05:07:18 -- event/cpu_locks.sh@105 -- # locks_exist 62733 00:09:59.402 05:07:18 -- event/cpu_locks.sh@22 -- # lslocks -p 62733 00:09:59.402 05:07:18 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:00.339 05:07:19 -- event/cpu_locks.sh@107 -- # killprocess 62715 00:10:00.339 05:07:19 -- common/autotest_common.sh@926 -- # '[' -z 62715 ']' 00:10:00.339 05:07:19 -- common/autotest_common.sh@930 -- # kill -0 62715 00:10:00.339 05:07:19 -- common/autotest_common.sh@931 -- # uname 00:10:00.339 05:07:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:00.339 05:07:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62715 00:10:00.339 killing process with pid 62715 00:10:00.339 05:07:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:00.339 05:07:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:00.339 05:07:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62715' 00:10:00.339 05:07:19 -- common/autotest_common.sh@945 -- # kill 62715 00:10:00.339 05:07:19 -- common/autotest_common.sh@950 -- # wait 62715 00:10:04.528 05:07:22 -- event/cpu_locks.sh@108 -- # killprocess 62733 00:10:04.528 05:07:22 -- common/autotest_common.sh@926 -- # '[' -z 62733 ']' 00:10:04.528 05:07:22 -- common/autotest_common.sh@930 -- # kill -0 62733 00:10:04.528 05:07:22 -- common/autotest_common.sh@931 -- # uname 00:10:04.528 05:07:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:04.528 05:07:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62733 00:10:04.528 killing process with pid 62733 00:10:04.528 05:07:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:04.528 05:07:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:04.528 05:07:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62733' 00:10:04.528 05:07:23 -- common/autotest_common.sh@945 -- # kill 62733 00:10:04.528 05:07:23 -- common/autotest_common.sh@950 -- # wait 62733 00:10:05.905 00:10:05.905 real 0m10.654s 00:10:05.905 user 0m11.474s 00:10:05.905 sys 0m1.403s 00:10:05.905 ************************************ 00:10:05.905 END TEST locking_app_on_unlocked_coremask 00:10:05.905 ************************************ 00:10:05.905 05:07:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:05.905 05:07:24 -- common/autotest_common.sh@10 -- # set +x 00:10:05.905 05:07:24 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:05.905 05:07:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:05.905 05:07:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:05.905 05:07:24 -- common/autotest_common.sh@10 -- # set 
+x 00:10:05.905 ************************************ 00:10:05.905 START TEST locking_app_on_locked_coremask 00:10:05.905 ************************************ 00:10:05.905 05:07:24 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:10:05.905 05:07:24 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=62872 00:10:05.905 05:07:24 -- event/cpu_locks.sh@116 -- # waitforlisten 62872 /var/tmp/spdk.sock 00:10:05.905 05:07:24 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:05.905 05:07:24 -- common/autotest_common.sh@819 -- # '[' -z 62872 ']' 00:10:05.905 05:07:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.905 05:07:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:05.905 05:07:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.905 05:07:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:05.905 05:07:24 -- common/autotest_common.sh@10 -- # set +x 00:10:05.905 [2024-07-26 05:07:25.010202] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:05.905 [2024-07-26 05:07:25.011135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62872 ] 00:10:06.164 [2024-07-26 05:07:25.180194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.423 [2024-07-26 05:07:25.349320] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:06.423 [2024-07-26 05:07:25.349562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.990 05:07:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:06.990 05:07:25 -- common/autotest_common.sh@852 -- # return 0 00:10:06.990 05:07:25 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=62888 00:10:06.990 05:07:25 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 62888 /var/tmp/spdk2.sock 00:10:06.990 05:07:25 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:06.990 05:07:25 -- common/autotest_common.sh@640 -- # local es=0 00:10:06.990 05:07:25 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 62888 /var/tmp/spdk2.sock 00:10:06.990 05:07:25 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:10:06.990 05:07:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:06.990 05:07:25 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:10:06.990 05:07:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:06.990 05:07:25 -- common/autotest_common.sh@643 -- # waitforlisten 62888 /var/tmp/spdk2.sock 00:10:06.990 05:07:25 -- common/autotest_common.sh@819 -- # '[' -z 62888 ']' 00:10:06.990 05:07:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:06.990 05:07:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:06.990 05:07:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:06.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:10:06.990 05:07:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:06.990 05:07:25 -- common/autotest_common.sh@10 -- # set +x 00:10:06.990 [2024-07-26 05:07:26.049387] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:06.990 [2024-07-26 05:07:26.049823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62888 ] 00:10:07.248 [2024-07-26 05:07:26.227950] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 62872 has claimed it. 00:10:07.248 [2024-07-26 05:07:26.228045] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:07.815 ERROR: process (pid: 62888) is no longer running 00:10:07.815 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (62888) - No such process 00:10:07.815 05:07:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:07.815 05:07:26 -- common/autotest_common.sh@852 -- # return 1 00:10:07.815 05:07:26 -- common/autotest_common.sh@643 -- # es=1 00:10:07.815 05:07:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:07.815 05:07:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:07.815 05:07:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:07.815 05:07:26 -- event/cpu_locks.sh@122 -- # locks_exist 62872 00:10:07.815 05:07:26 -- event/cpu_locks.sh@22 -- # lslocks -p 62872 00:10:07.815 05:07:26 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:08.400 05:07:27 -- event/cpu_locks.sh@124 -- # killprocess 62872 00:10:08.400 05:07:27 -- common/autotest_common.sh@926 -- # '[' -z 62872 ']' 00:10:08.400 05:07:27 -- common/autotest_common.sh@930 -- # kill -0 62872 00:10:08.400 05:07:27 -- common/autotest_common.sh@931 -- # uname 00:10:08.400 05:07:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:08.400 05:07:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62872 00:10:08.400 killing process with pid 62872 00:10:08.400 05:07:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:08.400 05:07:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:08.400 05:07:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62872' 00:10:08.400 05:07:27 -- common/autotest_common.sh@945 -- # kill 62872 00:10:08.400 05:07:27 -- common/autotest_common.sh@950 -- # wait 62872 00:10:10.303 00:10:10.303 real 0m4.168s 00:10:10.303 user 0m4.551s 00:10:10.303 sys 0m0.771s 00:10:10.303 ************************************ 00:10:10.303 END TEST locking_app_on_locked_coremask 00:10:10.303 ************************************ 00:10:10.303 05:07:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:10.303 05:07:29 -- common/autotest_common.sh@10 -- # set +x 00:10:10.303 05:07:29 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:10.303 05:07:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:10.303 05:07:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:10.303 05:07:29 -- common/autotest_common.sh@10 -- # set +x 00:10:10.303 ************************************ 00:10:10.303 START TEST locking_overlapped_coremask 00:10:10.303 ************************************ 00:10:10.303 05:07:29 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:10:10.303 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.303 05:07:29 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=62952 00:10:10.303 05:07:29 -- event/cpu_locks.sh@133 -- # waitforlisten 62952 /var/tmp/spdk.sock 00:10:10.303 05:07:29 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:10:10.303 05:07:29 -- common/autotest_common.sh@819 -- # '[' -z 62952 ']' 00:10:10.303 05:07:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.303 05:07:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:10.303 05:07:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.303 05:07:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:10.303 05:07:29 -- common/autotest_common.sh@10 -- # set +x 00:10:10.303 [2024-07-26 05:07:29.229436] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:10.303 [2024-07-26 05:07:29.229615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62952 ] 00:10:10.303 [2024-07-26 05:07:29.400611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:10.561 [2024-07-26 05:07:29.571033] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:10.561 [2024-07-26 05:07:29.571609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:10.561 [2024-07-26 05:07:29.571692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.561 [2024-07-26 05:07:29.571706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:11.936 05:07:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:11.936 05:07:30 -- common/autotest_common.sh@852 -- # return 0 00:10:11.936 05:07:30 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:11.936 05:07:30 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=62978 00:10:11.936 05:07:30 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 62978 /var/tmp/spdk2.sock 00:10:11.936 05:07:30 -- common/autotest_common.sh@640 -- # local es=0 00:10:11.936 05:07:30 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 62978 /var/tmp/spdk2.sock 00:10:11.936 05:07:30 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:10:11.936 05:07:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:11.936 05:07:30 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:10:11.936 05:07:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:11.936 05:07:30 -- common/autotest_common.sh@643 -- # waitforlisten 62978 /var/tmp/spdk2.sock 00:10:11.936 05:07:30 -- common/autotest_common.sh@819 -- # '[' -z 62978 ']' 00:10:11.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:11.936 05:07:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:11.936 05:07:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:11.936 05:07:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
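For orientation, these are the core masks behind the conflict this test is about to provoke (values taken from the two spdk_tgt commands above; the snippet below only restates them in shell arithmetic):

    # 0x7  = 0b00111 -> cores 0,1,2  claimed by pid 62952 via /var/tmp/spdk_cpu_lock_000..002
    # 0x1c = 0b11100 -> cores 2,3,4  requested by the second target started under NOT
    echo $(( 0x7 & 0x1c ))   # 4, i.e. bit 2 set: core 2 is contested, hence the claim error that follows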
00:10:11.936 05:07:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:11.936 05:07:30 -- common/autotest_common.sh@10 -- # set +x 00:10:11.936 [2024-07-26 05:07:30.939571] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:11.936 [2024-07-26 05:07:30.939738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62978 ] 00:10:12.195 [2024-07-26 05:07:31.118471] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62952 has claimed it. 00:10:12.195 [2024-07-26 05:07:31.118583] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:12.761 ERROR: process (pid: 62978) is no longer running 00:10:12.761 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (62978) - No such process 00:10:12.761 05:07:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:12.761 05:07:31 -- common/autotest_common.sh@852 -- # return 1 00:10:12.761 05:07:31 -- common/autotest_common.sh@643 -- # es=1 00:10:12.761 05:07:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:12.761 05:07:31 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:12.761 05:07:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:12.761 05:07:31 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:12.761 05:07:31 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:12.761 05:07:31 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:12.761 05:07:31 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:12.761 05:07:31 -- event/cpu_locks.sh@141 -- # killprocess 62952 00:10:12.761 05:07:31 -- common/autotest_common.sh@926 -- # '[' -z 62952 ']' 00:10:12.761 05:07:31 -- common/autotest_common.sh@930 -- # kill -0 62952 00:10:12.761 05:07:31 -- common/autotest_common.sh@931 -- # uname 00:10:12.761 05:07:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:12.761 05:07:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62952 00:10:12.761 05:07:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:12.761 05:07:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:12.761 05:07:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62952' 00:10:12.761 killing process with pid 62952 00:10:12.761 05:07:31 -- common/autotest_common.sh@945 -- # kill 62952 00:10:12.761 05:07:31 -- common/autotest_common.sh@950 -- # wait 62952 00:10:14.663 00:10:14.663 real 0m4.455s 00:10:14.663 user 0m12.176s 00:10:14.663 sys 0m0.574s 00:10:14.663 05:07:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:14.663 05:07:33 -- common/autotest_common.sh@10 -- # set +x 00:10:14.663 ************************************ 00:10:14.663 END TEST locking_overlapped_coremask 00:10:14.663 ************************************ 00:10:14.663 05:07:33 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:14.663 05:07:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:14.663 05:07:33 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:10:14.663 05:07:33 -- common/autotest_common.sh@10 -- # set +x 00:10:14.663 ************************************ 00:10:14.663 START TEST locking_overlapped_coremask_via_rpc 00:10:14.663 ************************************ 00:10:14.663 05:07:33 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:10:14.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.663 05:07:33 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=63042 00:10:14.663 05:07:33 -- event/cpu_locks.sh@149 -- # waitforlisten 63042 /var/tmp/spdk.sock 00:10:14.663 05:07:33 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:14.663 05:07:33 -- common/autotest_common.sh@819 -- # '[' -z 63042 ']' 00:10:14.663 05:07:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.663 05:07:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:14.663 05:07:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.663 05:07:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:14.663 05:07:33 -- common/autotest_common.sh@10 -- # set +x 00:10:14.663 [2024-07-26 05:07:33.727271] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:14.663 [2024-07-26 05:07:33.727411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63042 ] 00:10:14.922 [2024-07-26 05:07:33.885252] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:14.922 [2024-07-26 05:07:33.885578] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:15.181 [2024-07-26 05:07:34.058606] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:15.181 [2024-07-26 05:07:34.059201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.181 [2024-07-26 05:07:34.059293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.181 [2024-07-26 05:07:34.059310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:16.558 05:07:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:16.558 05:07:35 -- common/autotest_common.sh@852 -- # return 0 00:10:16.558 05:07:35 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:16.558 05:07:35 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=63068 00:10:16.558 05:07:35 -- event/cpu_locks.sh@153 -- # waitforlisten 63068 /var/tmp/spdk2.sock 00:10:16.558 05:07:35 -- common/autotest_common.sh@819 -- # '[' -z 63068 ']' 00:10:16.558 05:07:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:16.558 05:07:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:16.558 05:07:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:16.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
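Both targets in this test start with --disable-cpumask-locks (hence the "CPU core locks deactivated." notices), so the overlapping 0x7/0x1c masks are allowed to come up side by side without taking /var/tmp/spdk_cpu_lock_* files at startup. A quick manual check, assuming the same lock-file naming used elsewhere in this run:

    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo 'no core lock files held yet'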
00:10:16.558 05:07:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:16.558 05:07:35 -- common/autotest_common.sh@10 -- # set +x 00:10:16.558 [2024-07-26 05:07:35.454643] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:16.558 [2024-07-26 05:07:35.454796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63068 ] 00:10:16.558 [2024-07-26 05:07:35.629687] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:16.558 [2024-07-26 05:07:35.629751] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:17.124 [2024-07-26 05:07:35.977112] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:17.124 [2024-07-26 05:07:35.977554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:17.124 [2024-07-26 05:07:35.981196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:17.124 [2024-07-26 05:07:35.981203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:19.025 05:07:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:19.025 05:07:37 -- common/autotest_common.sh@852 -- # return 0 00:10:19.025 05:07:37 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:19.025 05:07:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:19.025 05:07:37 -- common/autotest_common.sh@10 -- # set +x 00:10:19.025 05:07:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:19.025 05:07:37 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:19.025 05:07:37 -- common/autotest_common.sh@640 -- # local es=0 00:10:19.025 05:07:37 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:19.025 05:07:37 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:10:19.025 05:07:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:19.025 05:07:37 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:10:19.025 05:07:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:19.025 05:07:37 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:19.025 05:07:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:19.025 05:07:37 -- common/autotest_common.sh@10 -- # set +x 00:10:19.025 [2024-07-26 05:07:37.941223] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63042 has claimed it. 00:10:19.025 request: 00:10:19.025 { 00:10:19.025 "method": "framework_enable_cpumask_locks", 00:10:19.025 "req_id": 1 00:10:19.025 } 00:10:19.025 Got JSON-RPC error response 00:10:19.025 response: 00:10:19.025 { 00:10:19.025 "code": -32603, 00:10:19.025 "message": "Failed to claim CPU core: 2" 00:10:19.025 } 00:10:19.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
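The -32603 error above is the deliberate half of the test: cpumask locks are re-enabled over JSON-RPC on the second target after the first one has already claimed core 2. Reduced to manual calls (assuming SPDK's scripts/rpc.py; socket paths as in this test):

    # second target: fails, as shown above, with 'Failed to claim CPU core: 2'
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # first target on the default /var/tmp/spdk.sock: the same call succeeds, which is what
    # the rpc_cmd framework_enable_cpumask_locks / [[ 0 == 0 ]] step just above already verified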
00:10:19.025 05:07:37 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:10:19.025 05:07:37 -- common/autotest_common.sh@643 -- # es=1 00:10:19.025 05:07:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:19.025 05:07:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:19.025 05:07:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:19.025 05:07:37 -- event/cpu_locks.sh@158 -- # waitforlisten 63042 /var/tmp/spdk.sock 00:10:19.025 05:07:37 -- common/autotest_common.sh@819 -- # '[' -z 63042 ']' 00:10:19.026 05:07:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.026 05:07:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:19.026 05:07:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.026 05:07:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:19.026 05:07:37 -- common/autotest_common.sh@10 -- # set +x 00:10:19.285 05:07:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:19.285 05:07:38 -- common/autotest_common.sh@852 -- # return 0 00:10:19.285 05:07:38 -- event/cpu_locks.sh@159 -- # waitforlisten 63068 /var/tmp/spdk2.sock 00:10:19.285 05:07:38 -- common/autotest_common.sh@819 -- # '[' -z 63068 ']' 00:10:19.285 05:07:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:19.285 05:07:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:19.285 05:07:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:19.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:19.285 05:07:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:19.285 05:07:38 -- common/autotest_common.sh@10 -- # set +x 00:10:19.543 05:07:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:19.543 05:07:38 -- common/autotest_common.sh@852 -- # return 0 00:10:19.543 05:07:38 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:19.543 05:07:38 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:19.543 ************************************ 00:10:19.543 END TEST locking_overlapped_coremask_via_rpc 00:10:19.543 ************************************ 00:10:19.543 05:07:38 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:19.543 05:07:38 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:19.543 00:10:19.543 real 0m4.818s 00:10:19.543 user 0m1.952s 00:10:19.543 sys 0m0.306s 00:10:19.543 05:07:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:19.543 05:07:38 -- common/autotest_common.sh@10 -- # set +x 00:10:19.543 05:07:38 -- event/cpu_locks.sh@174 -- # cleanup 00:10:19.543 05:07:38 -- event/cpu_locks.sh@15 -- # [[ -z 63042 ]] 00:10:19.543 05:07:38 -- event/cpu_locks.sh@15 -- # killprocess 63042 00:10:19.543 05:07:38 -- common/autotest_common.sh@926 -- # '[' -z 63042 ']' 00:10:19.543 05:07:38 -- common/autotest_common.sh@930 -- # kill -0 63042 00:10:19.543 05:07:38 -- common/autotest_common.sh@931 -- # uname 00:10:19.543 05:07:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:19.543 05:07:38 -- common/autotest_common.sh@932 -- # ps 
--no-headers -o comm= 63042 00:10:19.543 killing process with pid 63042 00:10:19.543 05:07:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:19.543 05:07:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:19.543 05:07:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63042' 00:10:19.543 05:07:38 -- common/autotest_common.sh@945 -- # kill 63042 00:10:19.543 05:07:38 -- common/autotest_common.sh@950 -- # wait 63042 00:10:22.116 05:07:40 -- event/cpu_locks.sh@16 -- # [[ -z 63068 ]] 00:10:22.116 05:07:40 -- event/cpu_locks.sh@16 -- # killprocess 63068 00:10:22.116 05:07:40 -- common/autotest_common.sh@926 -- # '[' -z 63068 ']' 00:10:22.116 05:07:40 -- common/autotest_common.sh@930 -- # kill -0 63068 00:10:22.116 05:07:40 -- common/autotest_common.sh@931 -- # uname 00:10:22.116 05:07:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:22.116 05:07:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63068 00:10:22.116 killing process with pid 63068 00:10:22.116 05:07:40 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:10:22.116 05:07:40 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:10:22.116 05:07:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63068' 00:10:22.116 05:07:40 -- common/autotest_common.sh@945 -- # kill 63068 00:10:22.116 05:07:40 -- common/autotest_common.sh@950 -- # wait 63068 00:10:24.021 05:07:42 -- event/cpu_locks.sh@18 -- # rm -f 00:10:24.021 05:07:42 -- event/cpu_locks.sh@1 -- # cleanup 00:10:24.021 05:07:42 -- event/cpu_locks.sh@15 -- # [[ -z 63042 ]] 00:10:24.021 05:07:42 -- event/cpu_locks.sh@15 -- # killprocess 63042 00:10:24.021 05:07:42 -- common/autotest_common.sh@926 -- # '[' -z 63042 ']' 00:10:24.021 05:07:42 -- common/autotest_common.sh@930 -- # kill -0 63042 00:10:24.021 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (63042) - No such process 00:10:24.021 Process with pid 63042 is not found 00:10:24.021 05:07:42 -- common/autotest_common.sh@953 -- # echo 'Process with pid 63042 is not found' 00:10:24.021 05:07:42 -- event/cpu_locks.sh@16 -- # [[ -z 63068 ]] 00:10:24.021 05:07:42 -- event/cpu_locks.sh@16 -- # killprocess 63068 00:10:24.021 05:07:42 -- common/autotest_common.sh@926 -- # '[' -z 63068 ']' 00:10:24.021 05:07:42 -- common/autotest_common.sh@930 -- # kill -0 63068 00:10:24.021 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (63068) - No such process 00:10:24.021 Process with pid 63068 is not found 00:10:24.021 05:07:42 -- common/autotest_common.sh@953 -- # echo 'Process with pid 63068 is not found' 00:10:24.021 05:07:42 -- event/cpu_locks.sh@18 -- # rm -f 00:10:24.021 00:10:24.021 real 0m48.336s 00:10:24.021 user 1m25.301s 00:10:24.021 sys 0m6.900s 00:10:24.021 ************************************ 00:10:24.021 END TEST cpu_locks 00:10:24.021 ************************************ 00:10:24.021 05:07:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:24.021 05:07:42 -- common/autotest_common.sh@10 -- # set +x 00:10:24.021 ************************************ 00:10:24.021 END TEST event 00:10:24.021 ************************************ 00:10:24.021 00:10:24.021 real 1m18.778s 00:10:24.021 user 2m23.302s 00:10:24.021 sys 0m10.765s 00:10:24.021 05:07:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:24.021 05:07:42 -- common/autotest_common.sh@10 -- # set +x 00:10:24.021 05:07:42 -- spdk/autotest.sh@188 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:24.021 05:07:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:24.021 05:07:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:24.021 05:07:42 -- common/autotest_common.sh@10 -- # set +x 00:10:24.021 ************************************ 00:10:24.021 START TEST thread 00:10:24.021 ************************************ 00:10:24.021 05:07:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:24.021 * Looking for test storage... 00:10:24.021 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:24.021 05:07:42 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:24.021 05:07:42 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:10:24.021 05:07:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:24.021 05:07:42 -- common/autotest_common.sh@10 -- # set +x 00:10:24.021 ************************************ 00:10:24.021 START TEST thread_poller_perf 00:10:24.021 ************************************ 00:10:24.021 05:07:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:24.021 [2024-07-26 05:07:43.013745] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:24.021 [2024-07-26 05:07:43.013911] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63245 ] 00:10:24.280 [2024-07-26 05:07:43.179691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.537 [2024-07-26 05:07:43.406370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.537 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:10:25.913 ====================================== 00:10:25.913 busy:2214447980 (cyc) 00:10:25.913 total_run_count: 325000 00:10:25.913 tsc_hz: 2200000000 (cyc) 00:10:25.913 ====================================== 00:10:25.913 poller_cost: 6813 (cyc), 3096 (nsec) 00:10:25.913 00:10:25.913 real 0m1.793s 00:10:25.913 user 0m1.591s 00:10:25.913 sys 0m0.101s 00:10:25.913 05:07:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:25.913 ************************************ 00:10:25.913 END TEST thread_poller_perf 00:10:25.913 ************************************ 00:10:25.913 05:07:44 -- common/autotest_common.sh@10 -- # set +x 00:10:25.913 05:07:44 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:25.913 05:07:44 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:10:25.913 05:07:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:25.913 05:07:44 -- common/autotest_common.sh@10 -- # set +x 00:10:25.913 ************************************ 00:10:25.913 START TEST thread_poller_perf 00:10:25.913 ************************************ 00:10:25.913 05:07:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:25.913 [2024-07-26 05:07:44.858086] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:10:25.913 [2024-07-26 05:07:44.858248] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63287 ] 00:10:26.171 [2024-07-26 05:07:45.031811] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.171 [2024-07-26 05:07:45.247588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.171 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:10:27.547 ====================================== 00:10:27.547 busy:2204150770 (cyc) 00:10:27.547 total_run_count: 4248000 00:10:27.547 tsc_hz: 2200000000 (cyc) 00:10:27.547 ====================================== 00:10:27.547 poller_cost: 518 (cyc), 235 (nsec) 00:10:27.547 00:10:27.547 real 0m1.791s 00:10:27.547 user 0m1.586s 00:10:27.547 sys 0m0.105s 00:10:27.547 05:07:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:27.547 05:07:46 -- common/autotest_common.sh@10 -- # set +x 00:10:27.547 ************************************ 00:10:27.547 END TEST thread_poller_perf 00:10:27.547 ************************************ 00:10:27.806 05:07:46 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:10:27.806 05:07:46 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:10:27.806 05:07:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:27.806 05:07:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:27.806 05:07:46 -- common/autotest_common.sh@10 -- # set +x 00:10:27.806 ************************************ 00:10:27.806 START TEST thread_spdk_lock 00:10:27.806 ************************************ 00:10:27.806 05:07:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:10:27.806 [2024-07-26 05:07:46.709182] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
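The poller_cost figures in the two poller_perf summaries above are plain ratios: busy TSC cycles divided by completed iterations, then scaled to nanoseconds by tsc_hz. Re-deriving them from the printed numbers (shell arithmetic only; values copied from the output):

    busy=2214447980; runs=325000; tsc_hz=2200000000     # first run, -l 1 (1 us poller period)
    echo $(( busy / runs ))                              # ~6813 cycles per poll
    echo $(( busy / runs * 1000000000 / tsc_hz ))        # ~3096 ns
    busy=2204150770; runs=4248000                        # second run, -l 0 (0 us period)
    echo $(( busy / runs ))                              # ~518 cycles
    echo $(( busy / runs * 1000000000 / tsc_hz ))        # ~235 ns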
00:10:27.806 [2024-07-26 05:07:46.709351] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63329 ] 00:10:27.806 [2024-07-26 05:07:46.877887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:28.064 [2024-07-26 05:07:47.040925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.064 [2024-07-26 05:07:47.040940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:28.673 [2024-07-26 05:07:47.580451] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 955:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:28.673 [2024-07-26 05:07:47.580541] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:10:28.673 [2024-07-26 05:07:47.580578] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x5bfa96f21500 00:10:28.673 [2024-07-26 05:07:47.589498] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:28.673 [2024-07-26 05:07:47.589620] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1016:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:28.673 [2024-07-26 05:07:47.589658] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:28.931 Starting test contend 00:10:28.931 Worker Delay Wait us Hold us Total us 00:10:28.931 0 3 118309 200149 318458 00:10:28.931 1 5 50911 304052 354963 00:10:28.931 PASS test contend 00:10:28.931 Starting test hold_by_poller 00:10:28.931 PASS test hold_by_poller 00:10:28.931 Starting test hold_by_message 00:10:28.932 PASS test hold_by_message 00:10:28.932 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:10:28.932 100014 assertions passed 00:10:28.932 0 assertions failed 00:10:28.932 00:10:28.932 real 0m1.275s 00:10:28.932 user 0m1.627s 00:10:28.932 sys 0m0.098s 00:10:28.932 05:07:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:28.932 05:07:47 -- common/autotest_common.sh@10 -- # set +x 00:10:28.932 ************************************ 00:10:28.932 END TEST thread_spdk_lock 00:10:28.932 ************************************ 00:10:28.932 00:10:28.932 real 0m5.101s 00:10:28.932 user 0m4.900s 00:10:28.932 sys 0m0.442s 00:10:28.932 05:07:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:28.932 05:07:47 -- common/autotest_common.sh@10 -- # set +x 00:10:28.932 ************************************ 00:10:28.932 END TEST thread 00:10:28.932 ************************************ 00:10:28.932 05:07:48 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:28.932 05:07:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:28.932 05:07:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:28.932 05:07:48 -- common/autotest_common.sh@10 -- # set +x 00:10:28.932 ************************************ 00:10:28.932 START TEST accel 00:10:28.932 
************************************ 00:10:28.932 05:07:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:29.191 * Looking for test storage... 00:10:29.191 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:10:29.191 05:07:48 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:10:29.191 05:07:48 -- accel/accel.sh@74 -- # get_expected_opcs 00:10:29.191 05:07:48 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:29.191 05:07:48 -- accel/accel.sh@59 -- # spdk_tgt_pid=63399 00:10:29.191 05:07:48 -- accel/accel.sh@60 -- # waitforlisten 63399 00:10:29.191 05:07:48 -- common/autotest_common.sh@819 -- # '[' -z 63399 ']' 00:10:29.191 05:07:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.191 05:07:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:29.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.191 05:07:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.191 05:07:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:29.191 05:07:48 -- common/autotest_common.sh@10 -- # set +x 00:10:29.191 05:07:48 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:10:29.191 05:07:48 -- accel/accel.sh@58 -- # build_accel_config 00:10:29.191 05:07:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:29.191 05:07:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:29.191 05:07:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:29.191 05:07:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:29.191 05:07:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:29.191 05:07:48 -- accel/accel.sh@41 -- # local IFS=, 00:10:29.191 05:07:48 -- accel/accel.sh@42 -- # jq -r . 00:10:29.191 [2024-07-26 05:07:48.188061] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:29.191 [2024-07-26 05:07:48.188217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63399 ] 00:10:29.450 [2024-07-26 05:07:48.354301] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.450 [2024-07-26 05:07:48.513694] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:29.450 [2024-07-26 05:07:48.514026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.827 05:07:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:30.827 05:07:49 -- common/autotest_common.sh@852 -- # return 0 00:10:30.827 05:07:49 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:10:30.827 05:07:49 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:10:30.827 05:07:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:30.827 05:07:49 -- common/autotest_common.sh@10 -- # set +x 00:10:30.827 05:07:49 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:10:30.827 05:07:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:30.827 05:07:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:30.827 05:07:49 -- accel/accel.sh@64 -- # IFS== 00:10:30.827 05:07:49 -- accel/accel.sh@64 -- # read -r opc module 00:10:30.827 05:07:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:30.827 05:07:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:30.827 05:07:49 -- accel/accel.sh@64 -- # IFS== 00:10:30.827 05:07:49 -- accel/accel.sh@64 -- # read -r opc module 00:10:30.827 05:07:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:30.827 05:07:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:30.827 05:07:49 -- accel/accel.sh@64 -- # IFS== 00:10:30.827 05:07:49 -- accel/accel.sh@64 -- # read -r opc module 00:10:30.827 05:07:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:30.827 05:07:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:30.827 05:07:49 -- accel/accel.sh@64 -- # IFS== 00:10:30.827 05:07:49 -- accel/accel.sh@64 -- # read -r opc module 00:10:30.827 05:07:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:30.827 05:07:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:30.827 05:07:49 -- accel/accel.sh@64 -- # IFS== 00:10:30.827 05:07:49 -- accel/accel.sh@64 -- # read -r opc module 00:10:30.827 05:07:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:30.827 05:07:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:30.827 05:07:49 -- accel/accel.sh@64 -- # IFS== 00:10:30.827 05:07:49 -- accel/accel.sh@64 -- # read -r opc module 00:10:30.827 05:07:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:30.827 05:07:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:30.827 05:07:49 -- accel/accel.sh@64 -- # IFS== 00:10:30.827 05:07:49 -- accel/accel.sh@64 -- # read -r opc module 00:10:30.827 05:07:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:30.827 05:07:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:30.827 05:07:49 -- accel/accel.sh@64 -- # IFS== 00:10:30.827 05:07:49 -- accel/accel.sh@64 -- # read -r opc module 00:10:30.827 05:07:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:30.827 05:07:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:30.827 05:07:49 -- accel/accel.sh@64 -- # IFS== 00:10:30.827 05:07:49 -- accel/accel.sh@64 -- # read -r opc module 00:10:30.827 05:07:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:30.827 05:07:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:30.827 05:07:49 -- accel/accel.sh@64 -- # IFS== 00:10:30.827 05:07:49 -- accel/accel.sh@64 -- # read -r opc module 00:10:30.827 05:07:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:30.827 05:07:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:30.827 05:07:49 -- accel/accel.sh@64 -- # IFS== 00:10:30.827 05:07:49 -- accel/accel.sh@64 -- # read -r opc module 00:10:30.827 05:07:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:30.827 05:07:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:30.827 05:07:49 -- accel/accel.sh@64 -- # IFS== 00:10:30.827 05:07:49 -- accel/accel.sh@64 -- # read -r opc module 00:10:30.827 05:07:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:30.827 05:07:49 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:10:30.828 05:07:49 -- accel/accel.sh@64 -- # IFS== 00:10:30.828 05:07:49 -- accel/accel.sh@64 -- # read -r opc module 00:10:30.828 05:07:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:30.828 05:07:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:30.828 05:07:49 -- accel/accel.sh@64 -- # IFS== 00:10:30.828 05:07:49 -- accel/accel.sh@64 -- # read -r opc module 00:10:30.828 05:07:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:30.828 05:07:49 -- accel/accel.sh@67 -- # killprocess 63399 00:10:30.828 05:07:49 -- common/autotest_common.sh@926 -- # '[' -z 63399 ']' 00:10:30.828 05:07:49 -- common/autotest_common.sh@930 -- # kill -0 63399 00:10:30.828 05:07:49 -- common/autotest_common.sh@931 -- # uname 00:10:30.828 05:07:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:30.828 05:07:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63399 00:10:30.828 05:07:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:30.828 05:07:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:30.828 05:07:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63399' 00:10:30.828 killing process with pid 63399 00:10:30.828 05:07:49 -- common/autotest_common.sh@945 -- # kill 63399 00:10:30.828 05:07:49 -- common/autotest_common.sh@950 -- # wait 63399 00:10:32.731 05:07:51 -- accel/accel.sh@68 -- # trap - ERR 00:10:32.731 05:07:51 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:10:32.731 05:07:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:32.731 05:07:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:32.731 05:07:51 -- common/autotest_common.sh@10 -- # set +x 00:10:32.731 05:07:51 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:10:32.731 05:07:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:10:32.731 05:07:51 -- accel/accel.sh@12 -- # build_accel_config 00:10:32.731 05:07:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:32.732 05:07:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:32.732 05:07:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:32.732 05:07:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:32.732 05:07:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:32.732 05:07:51 -- accel/accel.sh@41 -- # local IFS=, 00:10:32.732 05:07:51 -- accel/accel.sh@42 -- # jq -r . 
00:10:32.732 05:07:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:32.732 05:07:51 -- common/autotest_common.sh@10 -- # set +x 00:10:32.990 05:07:51 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:10:32.990 05:07:51 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:32.990 05:07:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:32.990 05:07:51 -- common/autotest_common.sh@10 -- # set +x 00:10:32.990 ************************************ 00:10:32.990 START TEST accel_missing_filename 00:10:32.990 ************************************ 00:10:32.990 05:07:51 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:10:32.990 05:07:51 -- common/autotest_common.sh@640 -- # local es=0 00:10:32.990 05:07:51 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:10:32.990 05:07:51 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:32.990 05:07:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:32.990 05:07:51 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:32.990 05:07:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:32.990 05:07:51 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:10:32.990 05:07:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:10:32.990 05:07:51 -- accel/accel.sh@12 -- # build_accel_config 00:10:32.991 05:07:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:32.991 05:07:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:32.991 05:07:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:32.991 05:07:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:32.991 05:07:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:32.991 05:07:51 -- accel/accel.sh@41 -- # local IFS=, 00:10:32.991 05:07:51 -- accel/accel.sh@42 -- # jq -r . 00:10:32.991 [2024-07-26 05:07:51.901266] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:32.991 [2024-07-26 05:07:51.901425] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63481 ] 00:10:32.991 [2024-07-26 05:07:52.054825] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.249 [2024-07-26 05:07:52.217572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.508 [2024-07-26 05:07:52.373704] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:33.766 [2024-07-26 05:07:52.784900] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:10:34.025 A filename is required. 
00:10:34.284 05:07:53 -- common/autotest_common.sh@643 -- # es=234 00:10:34.284 05:07:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:34.284 05:07:53 -- common/autotest_common.sh@652 -- # es=106 00:10:34.284 05:07:53 -- common/autotest_common.sh@653 -- # case "$es" in 00:10:34.284 05:07:53 -- common/autotest_common.sh@660 -- # es=1 00:10:34.284 05:07:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:34.284 00:10:34.284 real 0m1.280s 00:10:34.284 user 0m1.042s 00:10:34.284 sys 0m0.142s 00:10:34.284 05:07:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:34.284 ************************************ 00:10:34.284 END TEST accel_missing_filename 00:10:34.284 ************************************ 00:10:34.284 05:07:53 -- common/autotest_common.sh@10 -- # set +x 00:10:34.284 05:07:53 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:34.284 05:07:53 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:10:34.284 05:07:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:34.284 05:07:53 -- common/autotest_common.sh@10 -- # set +x 00:10:34.284 ************************************ 00:10:34.284 START TEST accel_compress_verify 00:10:34.284 ************************************ 00:10:34.284 05:07:53 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:34.284 05:07:53 -- common/autotest_common.sh@640 -- # local es=0 00:10:34.284 05:07:53 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:34.284 05:07:53 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:34.284 05:07:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:34.284 05:07:53 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:34.284 05:07:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:34.284 05:07:53 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:34.284 05:07:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:34.284 05:07:53 -- accel/accel.sh@12 -- # build_accel_config 00:10:34.284 05:07:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:34.284 05:07:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:34.284 05:07:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:34.284 05:07:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:34.284 05:07:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:34.284 05:07:53 -- accel/accel.sh@41 -- # local IFS=, 00:10:34.284 05:07:53 -- accel/accel.sh@42 -- # jq -r . 00:10:34.284 [2024-07-26 05:07:53.238487] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:10:34.284 [2024-07-26 05:07:53.238644] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63513 ] 00:10:34.543 [2024-07-26 05:07:53.408885] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.543 [2024-07-26 05:07:53.569304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.802 [2024-07-26 05:07:53.732858] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:35.060 [2024-07-26 05:07:54.145141] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:10:35.629 00:10:35.629 Compression does not support the verify option, aborting. 00:10:35.629 05:07:54 -- common/autotest_common.sh@643 -- # es=161 00:10:35.629 05:07:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:35.629 05:07:54 -- common/autotest_common.sh@652 -- # es=33 00:10:35.629 05:07:54 -- common/autotest_common.sh@653 -- # case "$es" in 00:10:35.629 05:07:54 -- common/autotest_common.sh@660 -- # es=1 00:10:35.629 05:07:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:35.629 00:10:35.629 real 0m1.299s 00:10:35.629 user 0m1.046s 00:10:35.629 sys 0m0.160s 00:10:35.629 05:07:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:35.629 05:07:54 -- common/autotest_common.sh@10 -- # set +x 00:10:35.629 ************************************ 00:10:35.629 END TEST accel_compress_verify 00:10:35.629 ************************************ 00:10:35.629 05:07:54 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:10:35.629 05:07:54 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:35.629 05:07:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:35.629 05:07:54 -- common/autotest_common.sh@10 -- # set +x 00:10:35.629 ************************************ 00:10:35.629 START TEST accel_wrong_workload 00:10:35.629 ************************************ 00:10:35.629 05:07:54 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:10:35.629 05:07:54 -- common/autotest_common.sh@640 -- # local es=0 00:10:35.629 05:07:54 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:10:35.629 05:07:54 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:35.629 05:07:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:35.629 05:07:54 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:35.629 05:07:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:35.629 05:07:54 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:10:35.629 05:07:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:10:35.629 05:07:54 -- accel/accel.sh@12 -- # build_accel_config 00:10:35.629 05:07:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:35.629 05:07:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:35.629 05:07:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:35.629 05:07:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:35.629 05:07:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:35.629 05:07:54 -- accel/accel.sh@41 -- # local IFS=, 00:10:35.629 05:07:54 -- accel/accel.sh@42 -- # jq -r . 
00:10:35.629 Unsupported workload type: foobar 00:10:35.629 [2024-07-26 05:07:54.583438] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:10:35.629 accel_perf options: 00:10:35.629 [-h help message] 00:10:35.629 [-q queue depth per core] 00:10:35.629 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:35.629 [-T number of threads per core 00:10:35.629 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:35.629 [-t time in seconds] 00:10:35.629 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:35.629 [ dif_verify, , dif_generate, dif_generate_copy 00:10:35.629 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:35.629 [-l for compress/decompress workloads, name of uncompressed input file 00:10:35.629 [-S for crc32c workload, use this seed value (default 0) 00:10:35.629 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:35.629 [-f for fill workload, use this BYTE value (default 255) 00:10:35.629 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:35.629 [-y verify result if this switch is on] 00:10:35.629 [-a tasks to allocate per core (default: same value as -q)] 00:10:35.629 Can be used to spread operations across a wider range of memory. 00:10:35.629 05:07:54 -- common/autotest_common.sh@643 -- # es=1 00:10:35.629 05:07:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:35.629 05:07:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:35.629 05:07:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:35.629 00:10:35.629 real 0m0.067s 00:10:35.629 user 0m0.034s 00:10:35.629 sys 0m0.042s 00:10:35.629 05:07:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:35.629 ************************************ 00:10:35.629 END TEST accel_wrong_workload 00:10:35.629 ************************************ 00:10:35.629 05:07:54 -- common/autotest_common.sh@10 -- # set +x 00:10:35.629 05:07:54 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:10:35.629 05:07:54 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:10:35.629 05:07:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:35.629 05:07:54 -- common/autotest_common.sh@10 -- # set +x 00:10:35.629 ************************************ 00:10:35.629 START TEST accel_negative_buffers 00:10:35.629 ************************************ 00:10:35.629 05:07:54 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:10:35.629 05:07:54 -- common/autotest_common.sh@640 -- # local es=0 00:10:35.629 05:07:54 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:10:35.629 05:07:54 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:35.629 05:07:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:35.629 05:07:54 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:35.629 05:07:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:35.629 05:07:54 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:10:35.629 05:07:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:10:35.629 05:07:54 -- accel/accel.sh@12 -- # 
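The usage dump above is printed because foobar is not one of the accepted -w values; any workload from the list makes the same command line valid. For example, the crc32c case this suite runs next, with queue depth and transfer size spelled out explicitly (they match the defaults reported in the later summary, 32 and 4096, but are otherwise only illustrative):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y -q 32 -o 4096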
build_accel_config 00:10:35.629 05:07:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:35.629 05:07:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:35.629 05:07:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:35.629 05:07:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:35.629 05:07:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:35.629 05:07:54 -- accel/accel.sh@41 -- # local IFS=, 00:10:35.629 05:07:54 -- accel/accel.sh@42 -- # jq -r . 00:10:35.629 -x option must be non-negative. 00:10:35.629 [2024-07-26 05:07:54.698156] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:10:35.629 accel_perf options: 00:10:35.629 [-h help message] 00:10:35.629 [-q queue depth per core] 00:10:35.629 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:35.629 [-T number of threads per core 00:10:35.629 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:35.629 [-t time in seconds] 00:10:35.629 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:35.629 [ dif_verify, , dif_generate, dif_generate_copy 00:10:35.629 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:35.629 [-l for compress/decompress workloads, name of uncompressed input file 00:10:35.629 [-S for crc32c workload, use this seed value (default 0) 00:10:35.629 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:35.629 [-f for fill workload, use this BYTE value (default 255) 00:10:35.629 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:35.629 [-y verify result if this switch is on] 00:10:35.629 [-a tasks to allocate per core (default: same value as -q)] 00:10:35.629 Can be used to spread operations across a wider range of memory. 
00:10:35.629 05:07:54 -- common/autotest_common.sh@643 -- # es=1 00:10:35.629 05:07:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:35.629 05:07:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:35.629 05:07:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:35.629 00:10:35.629 real 0m0.066s 00:10:35.630 user 0m0.034s 00:10:35.630 sys 0m0.040s 00:10:35.630 05:07:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:35.630 ************************************ 00:10:35.630 END TEST accel_negative_buffers 00:10:35.630 ************************************ 00:10:35.630 05:07:54 -- common/autotest_common.sh@10 -- # set +x 00:10:35.889 05:07:54 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:10:35.889 05:07:54 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:35.889 05:07:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:35.889 05:07:54 -- common/autotest_common.sh@10 -- # set +x 00:10:35.889 ************************************ 00:10:35.889 START TEST accel_crc32c 00:10:35.889 ************************************ 00:10:35.889 05:07:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:10:35.889 05:07:54 -- accel/accel.sh@16 -- # local accel_opc 00:10:35.889 05:07:54 -- accel/accel.sh@17 -- # local accel_module 00:10:35.889 05:07:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:35.889 05:07:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:35.889 05:07:54 -- accel/accel.sh@12 -- # build_accel_config 00:10:35.889 05:07:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:35.889 05:07:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:35.889 05:07:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:35.889 05:07:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:35.889 05:07:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:35.889 05:07:54 -- accel/accel.sh@41 -- # local IFS=, 00:10:35.889 05:07:54 -- accel/accel.sh@42 -- # jq -r . 00:10:35.889 [2024-07-26 05:07:54.809125] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:35.889 [2024-07-26 05:07:54.809276] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63586 ] 00:10:35.889 [2024-07-26 05:07:54.980439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.148 [2024-07-26 05:07:55.145466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.052 05:07:57 -- accel/accel.sh@18 -- # out=' 00:10:38.052 SPDK Configuration: 00:10:38.052 Core mask: 0x1 00:10:38.052 00:10:38.052 Accel Perf Configuration: 00:10:38.052 Workload Type: crc32c 00:10:38.052 CRC-32C seed: 32 00:10:38.052 Transfer size: 4096 bytes 00:10:38.052 Vector count 1 00:10:38.052 Module: software 00:10:38.052 Queue depth: 32 00:10:38.052 Allocate depth: 32 00:10:38.052 # threads/core: 1 00:10:38.052 Run time: 1 seconds 00:10:38.052 Verify: Yes 00:10:38.052 00:10:38.052 Running for 1 seconds... 
00:10:38.052 00:10:38.052 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:38.052 ------------------------------------------------------------------------------------ 00:10:38.052 0,0 444608/s 1736 MiB/s 0 0 00:10:38.052 ==================================================================================== 00:10:38.052 Total 444608/s 1736 MiB/s 0 0' 00:10:38.052 05:07:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.052 05:07:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.052 05:07:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:38.052 05:07:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:38.052 05:07:57 -- accel/accel.sh@12 -- # build_accel_config 00:10:38.052 05:07:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:38.052 05:07:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:38.052 05:07:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:38.052 05:07:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:38.052 05:07:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:38.052 05:07:57 -- accel/accel.sh@41 -- # local IFS=, 00:10:38.052 05:07:57 -- accel/accel.sh@42 -- # jq -r . 00:10:38.052 [2024-07-26 05:07:57.124654] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:38.052 [2024-07-26 05:07:57.124804] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63612 ] 00:10:38.310 [2024-07-26 05:07:57.292093] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.569 [2024-07-26 05:07:57.463495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.569 05:07:57 -- accel/accel.sh@21 -- # val= 00:10:38.569 05:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.569 05:07:57 -- accel/accel.sh@21 -- # val= 00:10:38.569 05:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.569 05:07:57 -- accel/accel.sh@21 -- # val=0x1 00:10:38.569 05:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.569 05:07:57 -- accel/accel.sh@21 -- # val= 00:10:38.569 05:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.569 05:07:57 -- accel/accel.sh@21 -- # val= 00:10:38.569 05:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.569 05:07:57 -- accel/accel.sh@21 -- # val=crc32c 00:10:38.569 05:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.569 05:07:57 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.569 05:07:57 -- accel/accel.sh@21 -- # val=32 00:10:38.569 05:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.569 05:07:57 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:10:38.569 05:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.569 05:07:57 -- accel/accel.sh@21 -- # val= 00:10:38.569 05:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.569 05:07:57 -- accel/accel.sh@21 -- # val=software 00:10:38.569 05:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.569 05:07:57 -- accel/accel.sh@23 -- # accel_module=software 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.569 05:07:57 -- accel/accel.sh@21 -- # val=32 00:10:38.569 05:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.569 05:07:57 -- accel/accel.sh@21 -- # val=32 00:10:38.569 05:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.569 05:07:57 -- accel/accel.sh@21 -- # val=1 00:10:38.569 05:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.569 05:07:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:38.569 05:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.569 05:07:57 -- accel/accel.sh@21 -- # val=Yes 00:10:38.569 05:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.569 05:07:57 -- accel/accel.sh@21 -- # val= 00:10:38.569 05:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.569 05:07:57 -- accel/accel.sh@21 -- # val= 00:10:38.569 05:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.569 05:07:57 -- accel/accel.sh@20 -- # read -r var val 00:10:40.567 05:07:59 -- accel/accel.sh@21 -- # val= 00:10:40.567 05:07:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.567 05:07:59 -- accel/accel.sh@20 -- # IFS=: 00:10:40.567 05:07:59 -- accel/accel.sh@20 -- # read -r var val 00:10:40.567 05:07:59 -- accel/accel.sh@21 -- # val= 00:10:40.567 05:07:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.567 05:07:59 -- accel/accel.sh@20 -- # IFS=: 00:10:40.567 05:07:59 -- accel/accel.sh@20 -- # read -r var val 00:10:40.567 05:07:59 -- accel/accel.sh@21 -- # val= 00:10:40.567 05:07:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.567 05:07:59 -- accel/accel.sh@20 -- # IFS=: 00:10:40.567 05:07:59 -- accel/accel.sh@20 -- # read -r var val 00:10:40.567 05:07:59 -- accel/accel.sh@21 -- # val= 00:10:40.567 05:07:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.567 05:07:59 -- accel/accel.sh@20 -- # IFS=: 00:10:40.567 05:07:59 -- accel/accel.sh@20 -- # read -r var val 00:10:40.567 05:07:59 -- accel/accel.sh@21 -- # val= 00:10:40.567 05:07:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.567 05:07:59 -- accel/accel.sh@20 -- # IFS=: 00:10:40.567 05:07:59 -- 
accel/accel.sh@20 -- # read -r var val 00:10:40.567 05:07:59 -- accel/accel.sh@21 -- # val= 00:10:40.567 05:07:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.567 05:07:59 -- accel/accel.sh@20 -- # IFS=: 00:10:40.567 05:07:59 -- accel/accel.sh@20 -- # read -r var val 00:10:40.567 05:07:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:40.567 05:07:59 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:40.567 05:07:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:40.567 00:10:40.567 real 0m4.599s 00:10:40.567 user 0m4.095s 00:10:40.567 sys 0m0.320s 00:10:40.567 05:07:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:40.567 05:07:59 -- common/autotest_common.sh@10 -- # set +x 00:10:40.567 ************************************ 00:10:40.567 END TEST accel_crc32c 00:10:40.567 ************************************ 00:10:40.567 05:07:59 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:10:40.567 05:07:59 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:40.567 05:07:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:40.567 05:07:59 -- common/autotest_common.sh@10 -- # set +x 00:10:40.567 ************************************ 00:10:40.567 START TEST accel_crc32c_C2 00:10:40.567 ************************************ 00:10:40.567 05:07:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:10:40.567 05:07:59 -- accel/accel.sh@16 -- # local accel_opc 00:10:40.567 05:07:59 -- accel/accel.sh@17 -- # local accel_module 00:10:40.568 05:07:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:40.568 05:07:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:40.568 05:07:59 -- accel/accel.sh@12 -- # build_accel_config 00:10:40.568 05:07:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:40.568 05:07:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:40.568 05:07:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:40.568 05:07:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:40.568 05:07:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:40.568 05:07:59 -- accel/accel.sh@41 -- # local IFS=, 00:10:40.568 05:07:59 -- accel/accel.sh@42 -- # jq -r . 00:10:40.568 [2024-07-26 05:07:59.461962] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:40.568 [2024-07-26 05:07:59.462145] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63653 ] 00:10:40.568 [2024-07-26 05:07:59.628447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.826 [2024-07-26 05:07:59.803206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.728 05:08:01 -- accel/accel.sh@18 -- # out=' 00:10:42.728 SPDK Configuration: 00:10:42.728 Core mask: 0x1 00:10:42.728 00:10:42.728 Accel Perf Configuration: 00:10:42.728 Workload Type: crc32c 00:10:42.728 CRC-32C seed: 0 00:10:42.728 Transfer size: 4096 bytes 00:10:42.728 Vector count 2 00:10:42.728 Module: software 00:10:42.728 Queue depth: 32 00:10:42.728 Allocate depth: 32 00:10:42.728 # threads/core: 1 00:10:42.728 Run time: 1 seconds 00:10:42.728 Verify: Yes 00:10:42.728 00:10:42.728 Running for 1 seconds... 
00:10:42.728 00:10:42.728 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:42.728 ------------------------------------------------------------------------------------ 00:10:42.728 0,0 343072/s 2680 MiB/s 0 0 00:10:42.728 ==================================================================================== 00:10:42.728 Total 343072/s 1340 MiB/s 0 0' 00:10:42.728 05:08:01 -- accel/accel.sh@20 -- # IFS=: 00:10:42.728 05:08:01 -- accel/accel.sh@20 -- # read -r var val 00:10:42.728 05:08:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:42.728 05:08:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:42.728 05:08:01 -- accel/accel.sh@12 -- # build_accel_config 00:10:42.728 05:08:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:42.728 05:08:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:42.728 05:08:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:42.728 05:08:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:42.728 05:08:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:42.728 05:08:01 -- accel/accel.sh@41 -- # local IFS=, 00:10:42.728 05:08:01 -- accel/accel.sh@42 -- # jq -r . 00:10:42.728 [2024-07-26 05:08:01.766678] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:42.728 [2024-07-26 05:08:01.766826] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63683 ] 00:10:42.986 [2024-07-26 05:08:01.935356] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.244 [2024-07-26 05:08:02.103099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.244 05:08:02 -- accel/accel.sh@21 -- # val= 00:10:43.244 05:08:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.244 05:08:02 -- accel/accel.sh@21 -- # val= 00:10:43.244 05:08:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.244 05:08:02 -- accel/accel.sh@21 -- # val=0x1 00:10:43.244 05:08:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.244 05:08:02 -- accel/accel.sh@21 -- # val= 00:10:43.244 05:08:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.244 05:08:02 -- accel/accel.sh@21 -- # val= 00:10:43.244 05:08:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.244 05:08:02 -- accel/accel.sh@21 -- # val=crc32c 00:10:43.244 05:08:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.244 05:08:02 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.244 05:08:02 -- accel/accel.sh@21 -- # val=0 00:10:43.244 05:08:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.244 05:08:02 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:10:43.244 05:08:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.244 05:08:02 -- accel/accel.sh@21 -- # val= 00:10:43.244 05:08:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.244 05:08:02 -- accel/accel.sh@21 -- # val=software 00:10:43.244 05:08:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.244 05:08:02 -- accel/accel.sh@23 -- # accel_module=software 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.244 05:08:02 -- accel/accel.sh@21 -- # val=32 00:10:43.244 05:08:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.244 05:08:02 -- accel/accel.sh@21 -- # val=32 00:10:43.244 05:08:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.244 05:08:02 -- accel/accel.sh@21 -- # val=1 00:10:43.244 05:08:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.244 05:08:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:43.244 05:08:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.244 05:08:02 -- accel/accel.sh@21 -- # val=Yes 00:10:43.244 05:08:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.244 05:08:02 -- accel/accel.sh@21 -- # val= 00:10:43.244 05:08:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.244 05:08:02 -- accel/accel.sh@21 -- # val= 00:10:43.244 05:08:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.244 05:08:02 -- accel/accel.sh@20 -- # read -r var val 00:10:45.142 05:08:04 -- accel/accel.sh@21 -- # val= 00:10:45.142 05:08:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.142 05:08:04 -- accel/accel.sh@20 -- # IFS=: 00:10:45.142 05:08:04 -- accel/accel.sh@20 -- # read -r var val 00:10:45.142 05:08:04 -- accel/accel.sh@21 -- # val= 00:10:45.142 05:08:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.142 05:08:04 -- accel/accel.sh@20 -- # IFS=: 00:10:45.142 05:08:04 -- accel/accel.sh@20 -- # read -r var val 00:10:45.142 05:08:04 -- accel/accel.sh@21 -- # val= 00:10:45.142 05:08:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.142 05:08:04 -- accel/accel.sh@20 -- # IFS=: 00:10:45.142 05:08:04 -- accel/accel.sh@20 -- # read -r var val 00:10:45.142 05:08:04 -- accel/accel.sh@21 -- # val= 00:10:45.142 05:08:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.142 05:08:04 -- accel/accel.sh@20 -- # IFS=: 00:10:45.142 05:08:04 -- accel/accel.sh@20 -- # read -r var val 00:10:45.142 05:08:04 -- accel/accel.sh@21 -- # val= 00:10:45.142 05:08:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.142 05:08:04 -- accel/accel.sh@20 -- # IFS=: 00:10:45.142 05:08:04 -- 
accel/accel.sh@20 -- # read -r var val 00:10:45.142 05:08:04 -- accel/accel.sh@21 -- # val= 00:10:45.142 05:08:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.142 05:08:04 -- accel/accel.sh@20 -- # IFS=: 00:10:45.142 05:08:04 -- accel/accel.sh@20 -- # read -r var val 00:10:45.142 05:08:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:45.142 05:08:04 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:45.142 05:08:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:45.142 00:10:45.142 real 0m4.643s 00:10:45.142 user 0m4.153s 00:10:45.142 sys 0m0.307s 00:10:45.142 05:08:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:45.143 05:08:04 -- common/autotest_common.sh@10 -- # set +x 00:10:45.143 ************************************ 00:10:45.143 END TEST accel_crc32c_C2 00:10:45.143 ************************************ 00:10:45.143 05:08:04 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:10:45.143 05:08:04 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:45.143 05:08:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:45.143 05:08:04 -- common/autotest_common.sh@10 -- # set +x 00:10:45.143 ************************************ 00:10:45.143 START TEST accel_copy 00:10:45.143 ************************************ 00:10:45.143 05:08:04 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:10:45.143 05:08:04 -- accel/accel.sh@16 -- # local accel_opc 00:10:45.143 05:08:04 -- accel/accel.sh@17 -- # local accel_module 00:10:45.143 05:08:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:10:45.143 05:08:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:45.143 05:08:04 -- accel/accel.sh@12 -- # build_accel_config 00:10:45.143 05:08:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:45.143 05:08:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:45.143 05:08:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:45.143 05:08:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:45.143 05:08:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:45.143 05:08:04 -- accel/accel.sh@41 -- # local IFS=, 00:10:45.143 05:08:04 -- accel/accel.sh@42 -- # jq -r . 00:10:45.143 [2024-07-26 05:08:04.154742] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:45.143 [2024-07-26 05:08:04.154902] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63731 ] 00:10:45.400 [2024-07-26 05:08:04.324274] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.400 [2024-07-26 05:08:04.490786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.300 05:08:06 -- accel/accel.sh@18 -- # out=' 00:10:47.300 SPDK Configuration: 00:10:47.300 Core mask: 0x1 00:10:47.300 00:10:47.300 Accel Perf Configuration: 00:10:47.300 Workload Type: copy 00:10:47.300 Transfer size: 4096 bytes 00:10:47.300 Vector count 1 00:10:47.300 Module: software 00:10:47.300 Queue depth: 32 00:10:47.300 Allocate depth: 32 00:10:47.301 # threads/core: 1 00:10:47.301 Run time: 1 seconds 00:10:47.301 Verify: Yes 00:10:47.301 00:10:47.301 Running for 1 seconds... 
00:10:47.301 00:10:47.301 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:47.301 ------------------------------------------------------------------------------------ 00:10:47.301 0,0 266496/s 1041 MiB/s 0 0 00:10:47.301 ==================================================================================== 00:10:47.301 Total 266496/s 1041 MiB/s 0 0' 00:10:47.301 05:08:06 -- accel/accel.sh@20 -- # IFS=: 00:10:47.301 05:08:06 -- accel/accel.sh@20 -- # read -r var val 00:10:47.301 05:08:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:10:47.559 05:08:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:47.559 05:08:06 -- accel/accel.sh@12 -- # build_accel_config 00:10:47.559 05:08:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:47.559 05:08:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:47.559 05:08:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:47.559 05:08:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:47.559 05:08:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:47.559 05:08:06 -- accel/accel.sh@41 -- # local IFS=, 00:10:47.559 05:08:06 -- accel/accel.sh@42 -- # jq -r . 00:10:47.559 [2024-07-26 05:08:06.447755] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:47.559 [2024-07-26 05:08:06.447904] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63757 ] 00:10:47.559 [2024-07-26 05:08:06.615509] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.818 [2024-07-26 05:08:06.790806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.076 05:08:06 -- accel/accel.sh@21 -- # val= 00:10:48.076 05:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.076 05:08:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.076 05:08:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.076 05:08:06 -- accel/accel.sh@21 -- # val= 00:10:48.076 05:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.076 05:08:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.076 05:08:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.076 05:08:06 -- accel/accel.sh@21 -- # val=0x1 00:10:48.076 05:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.076 05:08:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.076 05:08:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.076 05:08:06 -- accel/accel.sh@21 -- # val= 00:10:48.076 05:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.076 05:08:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.076 05:08:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.076 05:08:06 -- accel/accel.sh@21 -- # val= 00:10:48.076 05:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.076 05:08:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.076 05:08:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.076 05:08:06 -- accel/accel.sh@21 -- # val=copy 00:10:48.076 05:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.076 05:08:06 -- accel/accel.sh@24 -- # accel_opc=copy 00:10:48.076 05:08:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.076 05:08:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.076 05:08:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:48.076 05:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.076 05:08:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.076 05:08:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.076 05:08:06 -- 
accel/accel.sh@21 -- # val= 00:10:48.076 05:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.076 05:08:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.076 05:08:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.076 05:08:06 -- accel/accel.sh@21 -- # val=software 00:10:48.076 05:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.076 05:08:06 -- accel/accel.sh@23 -- # accel_module=software 00:10:48.076 05:08:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.077 05:08:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.077 05:08:06 -- accel/accel.sh@21 -- # val=32 00:10:48.077 05:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.077 05:08:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.077 05:08:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.077 05:08:06 -- accel/accel.sh@21 -- # val=32 00:10:48.077 05:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.077 05:08:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.077 05:08:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.077 05:08:06 -- accel/accel.sh@21 -- # val=1 00:10:48.077 05:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.077 05:08:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.077 05:08:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.077 05:08:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:48.077 05:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.077 05:08:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.077 05:08:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.077 05:08:06 -- accel/accel.sh@21 -- # val=Yes 00:10:48.077 05:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.077 05:08:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.077 05:08:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.077 05:08:06 -- accel/accel.sh@21 -- # val= 00:10:48.077 05:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.077 05:08:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.077 05:08:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.077 05:08:06 -- accel/accel.sh@21 -- # val= 00:10:48.077 05:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.077 05:08:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.077 05:08:06 -- accel/accel.sh@20 -- # read -r var val 00:10:49.982 05:08:08 -- accel/accel.sh@21 -- # val= 00:10:49.982 05:08:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.982 05:08:08 -- accel/accel.sh@20 -- # IFS=: 00:10:49.982 05:08:08 -- accel/accel.sh@20 -- # read -r var val 00:10:49.982 05:08:08 -- accel/accel.sh@21 -- # val= 00:10:49.982 05:08:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.982 05:08:08 -- accel/accel.sh@20 -- # IFS=: 00:10:49.982 05:08:08 -- accel/accel.sh@20 -- # read -r var val 00:10:49.982 05:08:08 -- accel/accel.sh@21 -- # val= 00:10:49.982 05:08:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.982 05:08:08 -- accel/accel.sh@20 -- # IFS=: 00:10:49.982 05:08:08 -- accel/accel.sh@20 -- # read -r var val 00:10:49.982 05:08:08 -- accel/accel.sh@21 -- # val= 00:10:49.982 05:08:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.982 05:08:08 -- accel/accel.sh@20 -- # IFS=: 00:10:49.982 05:08:08 -- accel/accel.sh@20 -- # read -r var val 00:10:49.982 05:08:08 -- accel/accel.sh@21 -- # val= 00:10:49.982 05:08:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.982 05:08:08 -- accel/accel.sh@20 -- # IFS=: 00:10:49.982 05:08:08 -- accel/accel.sh@20 -- # read -r var val 00:10:49.982 05:08:08 -- accel/accel.sh@21 -- # val= 00:10:49.982 05:08:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.982 05:08:08 -- accel/accel.sh@20 -- # IFS=: 00:10:49.982 05:08:08 -- 
accel/accel.sh@20 -- # read -r var val 00:10:49.982 05:08:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:49.982 05:08:08 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:10:49.982 05:08:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:49.982 00:10:49.982 real 0m4.609s 00:10:49.982 user 0m4.112s 00:10:49.982 sys 0m0.314s 00:10:49.982 05:08:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:49.982 05:08:08 -- common/autotest_common.sh@10 -- # set +x 00:10:49.982 ************************************ 00:10:49.982 END TEST accel_copy 00:10:49.982 ************************************ 00:10:49.982 05:08:08 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:49.982 05:08:08 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:49.982 05:08:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:49.982 05:08:08 -- common/autotest_common.sh@10 -- # set +x 00:10:49.982 ************************************ 00:10:49.982 START TEST accel_fill 00:10:49.982 ************************************ 00:10:49.982 05:08:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:49.982 05:08:08 -- accel/accel.sh@16 -- # local accel_opc 00:10:49.982 05:08:08 -- accel/accel.sh@17 -- # local accel_module 00:10:49.982 05:08:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:49.982 05:08:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:49.982 05:08:08 -- accel/accel.sh@12 -- # build_accel_config 00:10:49.982 05:08:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:49.982 05:08:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:49.982 05:08:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:49.982 05:08:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:49.982 05:08:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:49.982 05:08:08 -- accel/accel.sh@41 -- # local IFS=, 00:10:49.982 05:08:08 -- accel/accel.sh@42 -- # jq -r . 00:10:49.982 [2024-07-26 05:08:08.807898] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:49.982 [2024-07-26 05:08:08.808081] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63798 ] 00:10:49.982 [2024-07-26 05:08:08.958578] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.241 [2024-07-26 05:08:09.131883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.146 05:08:11 -- accel/accel.sh@18 -- # out=' 00:10:52.146 SPDK Configuration: 00:10:52.146 Core mask: 0x1 00:10:52.146 00:10:52.146 Accel Perf Configuration: 00:10:52.146 Workload Type: fill 00:10:52.146 Fill pattern: 0x80 00:10:52.146 Transfer size: 4096 bytes 00:10:52.146 Vector count 1 00:10:52.146 Module: software 00:10:52.146 Queue depth: 64 00:10:52.146 Allocate depth: 64 00:10:52.146 # threads/core: 1 00:10:52.146 Run time: 1 seconds 00:10:52.146 Verify: Yes 00:10:52.146 00:10:52.146 Running for 1 seconds... 
00:10:52.146 00:10:52.146 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:52.146 ------------------------------------------------------------------------------------ 00:10:52.146 0,0 424960/s 1660 MiB/s 0 0 00:10:52.146 ==================================================================================== 00:10:52.146 Total 424960/s 1660 MiB/s 0 0' 00:10:52.146 05:08:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.146 05:08:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:52.146 05:08:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.146 05:08:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:52.146 05:08:11 -- accel/accel.sh@12 -- # build_accel_config 00:10:52.146 05:08:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:52.146 05:08:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:52.146 05:08:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:52.146 05:08:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:52.146 05:08:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:52.146 05:08:11 -- accel/accel.sh@41 -- # local IFS=, 00:10:52.146 05:08:11 -- accel/accel.sh@42 -- # jq -r . 00:10:52.146 [2024-07-26 05:08:11.093261] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:52.146 [2024-07-26 05:08:11.093425] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63830 ] 00:10:52.405 [2024-07-26 05:08:11.267235] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.405 [2024-07-26 05:08:11.431679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.664 05:08:11 -- accel/accel.sh@21 -- # val= 00:10:52.664 05:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.664 05:08:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.664 05:08:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.664 05:08:11 -- accel/accel.sh@21 -- # val= 00:10:52.664 05:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.664 05:08:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.664 05:08:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.664 05:08:11 -- accel/accel.sh@21 -- # val=0x1 00:10:52.664 05:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.664 05:08:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.664 05:08:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.664 05:08:11 -- accel/accel.sh@21 -- # val= 00:10:52.664 05:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.664 05:08:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.664 05:08:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.664 05:08:11 -- accel/accel.sh@21 -- # val= 00:10:52.664 05:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.664 05:08:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.664 05:08:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.664 05:08:11 -- accel/accel.sh@21 -- # val=fill 00:10:52.664 05:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.664 05:08:11 -- accel/accel.sh@24 -- # accel_opc=fill 00:10:52.664 05:08:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.664 05:08:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.664 05:08:11 -- accel/accel.sh@21 -- # val=0x80 00:10:52.664 05:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.664 05:08:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.664 05:08:11 -- accel/accel.sh@20 -- # read -r var val 
00:10:52.664 05:08:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:52.664 05:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.664 05:08:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.664 05:08:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.664 05:08:11 -- accel/accel.sh@21 -- # val= 00:10:52.664 05:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.664 05:08:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.664 05:08:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.664 05:08:11 -- accel/accel.sh@21 -- # val=software 00:10:52.664 05:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.664 05:08:11 -- accel/accel.sh@23 -- # accel_module=software 00:10:52.664 05:08:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.664 05:08:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.664 05:08:11 -- accel/accel.sh@21 -- # val=64 00:10:52.664 05:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.664 05:08:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.664 05:08:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.664 05:08:11 -- accel/accel.sh@21 -- # val=64 00:10:52.664 05:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.664 05:08:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.664 05:08:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.664 05:08:11 -- accel/accel.sh@21 -- # val=1 00:10:52.665 05:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.665 05:08:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.665 05:08:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.665 05:08:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:52.665 05:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.665 05:08:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.665 05:08:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.665 05:08:11 -- accel/accel.sh@21 -- # val=Yes 00:10:52.665 05:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.665 05:08:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.665 05:08:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.665 05:08:11 -- accel/accel.sh@21 -- # val= 00:10:52.665 05:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.665 05:08:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.665 05:08:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.665 05:08:11 -- accel/accel.sh@21 -- # val= 00:10:52.665 05:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.665 05:08:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.665 05:08:11 -- accel/accel.sh@20 -- # read -r var val 00:10:54.612 05:08:13 -- accel/accel.sh@21 -- # val= 00:10:54.612 05:08:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.612 05:08:13 -- accel/accel.sh@20 -- # IFS=: 00:10:54.612 05:08:13 -- accel/accel.sh@20 -- # read -r var val 00:10:54.612 05:08:13 -- accel/accel.sh@21 -- # val= 00:10:54.612 05:08:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.612 05:08:13 -- accel/accel.sh@20 -- # IFS=: 00:10:54.612 05:08:13 -- accel/accel.sh@20 -- # read -r var val 00:10:54.612 05:08:13 -- accel/accel.sh@21 -- # val= 00:10:54.612 05:08:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.612 05:08:13 -- accel/accel.sh@20 -- # IFS=: 00:10:54.612 05:08:13 -- accel/accel.sh@20 -- # read -r var val 00:10:54.612 05:08:13 -- accel/accel.sh@21 -- # val= 00:10:54.612 05:08:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.612 05:08:13 -- accel/accel.sh@20 -- # IFS=: 00:10:54.612 05:08:13 -- accel/accel.sh@20 -- # read -r var val 00:10:54.612 05:08:13 -- accel/accel.sh@21 -- # val= 00:10:54.612 05:08:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.612 05:08:13 -- accel/accel.sh@20 -- # IFS=: 
00:10:54.612 05:08:13 -- accel/accel.sh@20 -- # read -r var val 00:10:54.612 05:08:13 -- accel/accel.sh@21 -- # val= 00:10:54.612 05:08:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.612 05:08:13 -- accel/accel.sh@20 -- # IFS=: 00:10:54.612 05:08:13 -- accel/accel.sh@20 -- # read -r var val 00:10:54.612 05:08:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:54.612 05:08:13 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:10:54.612 05:08:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:54.612 00:10:54.612 real 0m4.591s 00:10:54.612 user 0m4.102s 00:10:54.612 sys 0m0.302s 00:10:54.612 05:08:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:54.612 05:08:13 -- common/autotest_common.sh@10 -- # set +x 00:10:54.612 ************************************ 00:10:54.612 END TEST accel_fill 00:10:54.612 ************************************ 00:10:54.612 05:08:13 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:10:54.612 05:08:13 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:54.612 05:08:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:54.612 05:08:13 -- common/autotest_common.sh@10 -- # set +x 00:10:54.612 ************************************ 00:10:54.612 START TEST accel_copy_crc32c 00:10:54.612 ************************************ 00:10:54.612 05:08:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:10:54.612 05:08:13 -- accel/accel.sh@16 -- # local accel_opc 00:10:54.612 05:08:13 -- accel/accel.sh@17 -- # local accel_module 00:10:54.612 05:08:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:54.612 05:08:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:54.612 05:08:13 -- accel/accel.sh@12 -- # build_accel_config 00:10:54.612 05:08:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:54.612 05:08:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:54.612 05:08:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:54.612 05:08:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:54.612 05:08:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:54.612 05:08:13 -- accel/accel.sh@41 -- # local IFS=, 00:10:54.612 05:08:13 -- accel/accel.sh@42 -- # jq -r . 00:10:54.612 [2024-07-26 05:08:13.453384] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:54.612 [2024-07-26 05:08:13.453544] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63871 ] 00:10:54.612 [2024-07-26 05:08:13.624738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.871 [2024-07-26 05:08:13.796858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.775 05:08:15 -- accel/accel.sh@18 -- # out=' 00:10:56.775 SPDK Configuration: 00:10:56.775 Core mask: 0x1 00:10:56.775 00:10:56.775 Accel Perf Configuration: 00:10:56.775 Workload Type: copy_crc32c 00:10:56.775 CRC-32C seed: 0 00:10:56.775 Vector size: 4096 bytes 00:10:56.775 Transfer size: 4096 bytes 00:10:56.775 Vector count 1 00:10:56.775 Module: software 00:10:56.775 Queue depth: 32 00:10:56.775 Allocate depth: 32 00:10:56.775 # threads/core: 1 00:10:56.775 Run time: 1 seconds 00:10:56.775 Verify: Yes 00:10:56.775 00:10:56.775 Running for 1 seconds... 
00:10:56.775 00:10:56.775 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:56.775 ------------------------------------------------------------------------------------ 00:10:56.775 0,0 224704/s 877 MiB/s 0 0 00:10:56.775 ==================================================================================== 00:10:56.775 Total 224704/s 877 MiB/s 0 0' 00:10:56.775 05:08:15 -- accel/accel.sh@20 -- # IFS=: 00:10:56.775 05:08:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:56.775 05:08:15 -- accel/accel.sh@20 -- # read -r var val 00:10:56.775 05:08:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:56.775 05:08:15 -- accel/accel.sh@12 -- # build_accel_config 00:10:56.775 05:08:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:56.775 05:08:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:56.775 05:08:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:56.775 05:08:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:56.775 05:08:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:56.775 05:08:15 -- accel/accel.sh@41 -- # local IFS=, 00:10:56.775 05:08:15 -- accel/accel.sh@42 -- # jq -r . 00:10:56.775 [2024-07-26 05:08:15.754980] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:56.776 [2024-07-26 05:08:15.755173] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63897 ] 00:10:57.035 [2024-07-26 05:08:15.921395] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.035 [2024-07-26 05:08:16.090846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.295 05:08:16 -- accel/accel.sh@21 -- # val= 00:10:57.295 05:08:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.295 05:08:16 -- accel/accel.sh@21 -- # val= 00:10:57.295 05:08:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.295 05:08:16 -- accel/accel.sh@21 -- # val=0x1 00:10:57.295 05:08:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.295 05:08:16 -- accel/accel.sh@21 -- # val= 00:10:57.295 05:08:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.295 05:08:16 -- accel/accel.sh@21 -- # val= 00:10:57.295 05:08:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.295 05:08:16 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:57.295 05:08:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.295 05:08:16 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.295 05:08:16 -- accel/accel.sh@21 -- # val=0 00:10:57.295 05:08:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.295 
05:08:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:57.295 05:08:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.295 05:08:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:57.295 05:08:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.295 05:08:16 -- accel/accel.sh@21 -- # val= 00:10:57.295 05:08:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.295 05:08:16 -- accel/accel.sh@21 -- # val=software 00:10:57.295 05:08:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.295 05:08:16 -- accel/accel.sh@23 -- # accel_module=software 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.295 05:08:16 -- accel/accel.sh@21 -- # val=32 00:10:57.295 05:08:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.295 05:08:16 -- accel/accel.sh@21 -- # val=32 00:10:57.295 05:08:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.295 05:08:16 -- accel/accel.sh@21 -- # val=1 00:10:57.295 05:08:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.295 05:08:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:57.295 05:08:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.295 05:08:16 -- accel/accel.sh@21 -- # val=Yes 00:10:57.295 05:08:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.295 05:08:16 -- accel/accel.sh@21 -- # val= 00:10:57.295 05:08:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.295 05:08:16 -- accel/accel.sh@21 -- # val= 00:10:57.295 05:08:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.295 05:08:16 -- accel/accel.sh@20 -- # read -r var val 00:10:59.201 05:08:18 -- accel/accel.sh@21 -- # val= 00:10:59.201 05:08:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.201 05:08:18 -- accel/accel.sh@20 -- # IFS=: 00:10:59.201 05:08:18 -- accel/accel.sh@20 -- # read -r var val 00:10:59.201 05:08:18 -- accel/accel.sh@21 -- # val= 00:10:59.201 05:08:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.201 05:08:18 -- accel/accel.sh@20 -- # IFS=: 00:10:59.201 05:08:18 -- accel/accel.sh@20 -- # read -r var val 00:10:59.201 05:08:18 -- accel/accel.sh@21 -- # val= 00:10:59.201 05:08:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.201 05:08:18 -- accel/accel.sh@20 -- # IFS=: 00:10:59.201 05:08:18 -- accel/accel.sh@20 -- # read -r var val 00:10:59.201 05:08:18 -- accel/accel.sh@21 -- # val= 00:10:59.201 05:08:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.201 05:08:18 -- accel/accel.sh@20 -- # IFS=: 
00:10:59.201 05:08:18 -- accel/accel.sh@20 -- # read -r var val 00:10:59.201 05:08:18 -- accel/accel.sh@21 -- # val= 00:10:59.201 05:08:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.201 05:08:18 -- accel/accel.sh@20 -- # IFS=: 00:10:59.201 05:08:18 -- accel/accel.sh@20 -- # read -r var val 00:10:59.201 05:08:18 -- accel/accel.sh@21 -- # val= 00:10:59.201 05:08:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.201 05:08:18 -- accel/accel.sh@20 -- # IFS=: 00:10:59.201 05:08:18 -- accel/accel.sh@20 -- # read -r var val 00:10:59.201 05:08:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:59.201 05:08:18 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:10:59.201 05:08:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:59.201 00:10:59.201 real 0m4.608s 00:10:59.201 user 0m4.103s 00:10:59.201 sys 0m0.336s 00:10:59.201 05:08:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:59.201 05:08:18 -- common/autotest_common.sh@10 -- # set +x 00:10:59.201 ************************************ 00:10:59.201 END TEST accel_copy_crc32c 00:10:59.201 ************************************ 00:10:59.201 05:08:18 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:10:59.201 05:08:18 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:59.201 05:08:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:59.201 05:08:18 -- common/autotest_common.sh@10 -- # set +x 00:10:59.201 ************************************ 00:10:59.201 START TEST accel_copy_crc32c_C2 00:10:59.201 ************************************ 00:10:59.201 05:08:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:10:59.201 05:08:18 -- accel/accel.sh@16 -- # local accel_opc 00:10:59.201 05:08:18 -- accel/accel.sh@17 -- # local accel_module 00:10:59.201 05:08:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:59.201 05:08:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:59.201 05:08:18 -- accel/accel.sh@12 -- # build_accel_config 00:10:59.201 05:08:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:59.201 05:08:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:59.201 05:08:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:59.201 05:08:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:59.201 05:08:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:59.201 05:08:18 -- accel/accel.sh@41 -- # local IFS=, 00:10:59.201 05:08:18 -- accel/accel.sh@42 -- # jq -r . 00:10:59.201 [2024-07-26 05:08:18.119238] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:10:59.201 [2024-07-26 05:08:18.119455] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63938 ] 00:10:59.201 [2024-07-26 05:08:18.291475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.464 [2024-07-26 05:08:18.480655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.394 05:08:20 -- accel/accel.sh@18 -- # out=' 00:11:01.394 SPDK Configuration: 00:11:01.394 Core mask: 0x1 00:11:01.394 00:11:01.394 Accel Perf Configuration: 00:11:01.394 Workload Type: copy_crc32c 00:11:01.394 CRC-32C seed: 0 00:11:01.394 Vector size: 4096 bytes 00:11:01.394 Transfer size: 8192 bytes 00:11:01.394 Vector count 2 00:11:01.394 Module: software 00:11:01.394 Queue depth: 32 00:11:01.394 Allocate depth: 32 00:11:01.394 # threads/core: 1 00:11:01.394 Run time: 1 seconds 00:11:01.394 Verify: Yes 00:11:01.394 00:11:01.394 Running for 1 seconds... 00:11:01.394 00:11:01.394 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:01.394 ------------------------------------------------------------------------------------ 00:11:01.394 0,0 152192/s 1189 MiB/s 0 0 00:11:01.394 ==================================================================================== 00:11:01.394 Total 152192/s 594 MiB/s 0 0' 00:11:01.394 05:08:20 -- accel/accel.sh@20 -- # IFS=: 00:11:01.394 05:08:20 -- accel/accel.sh@20 -- # read -r var val 00:11:01.394 05:08:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:11:01.394 05:08:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:11:01.394 05:08:20 -- accel/accel.sh@12 -- # build_accel_config 00:11:01.394 05:08:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:01.394 05:08:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:01.394 05:08:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:01.394 05:08:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:01.394 05:08:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:01.394 05:08:20 -- accel/accel.sh@41 -- # local IFS=, 00:11:01.394 05:08:20 -- accel/accel.sh@42 -- # jq -r . 00:11:01.394 [2024-07-26 05:08:20.483636] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:11:01.394 [2024-07-26 05:08:20.483800] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63975 ] 00:11:01.653 [2024-07-26 05:08:20.652570] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.912 [2024-07-26 05:08:20.821009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.912 05:08:20 -- accel/accel.sh@21 -- # val= 00:11:01.912 05:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # IFS=: 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # read -r var val 00:11:01.912 05:08:20 -- accel/accel.sh@21 -- # val= 00:11:01.912 05:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # IFS=: 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # read -r var val 00:11:01.912 05:08:20 -- accel/accel.sh@21 -- # val=0x1 00:11:01.912 05:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # IFS=: 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # read -r var val 00:11:01.912 05:08:20 -- accel/accel.sh@21 -- # val= 00:11:01.912 05:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # IFS=: 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # read -r var val 00:11:01.912 05:08:20 -- accel/accel.sh@21 -- # val= 00:11:01.912 05:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # IFS=: 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # read -r var val 00:11:01.912 05:08:20 -- accel/accel.sh@21 -- # val=copy_crc32c 00:11:01.912 05:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.912 05:08:20 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # IFS=: 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # read -r var val 00:11:01.912 05:08:20 -- accel/accel.sh@21 -- # val=0 00:11:01.912 05:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # IFS=: 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # read -r var val 00:11:01.912 05:08:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:01.912 05:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # IFS=: 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # read -r var val 00:11:01.912 05:08:20 -- accel/accel.sh@21 -- # val='8192 bytes' 00:11:01.912 05:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # IFS=: 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # read -r var val 00:11:01.912 05:08:20 -- accel/accel.sh@21 -- # val= 00:11:01.912 05:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # IFS=: 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # read -r var val 00:11:01.912 05:08:20 -- accel/accel.sh@21 -- # val=software 00:11:01.912 05:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.912 05:08:20 -- accel/accel.sh@23 -- # accel_module=software 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # IFS=: 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # read -r var val 00:11:01.912 05:08:20 -- accel/accel.sh@21 -- # val=32 00:11:01.912 05:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # IFS=: 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # read -r var val 00:11:01.912 05:08:20 -- accel/accel.sh@21 -- # val=32 
00:11:01.912 05:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # IFS=: 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # read -r var val 00:11:01.912 05:08:20 -- accel/accel.sh@21 -- # val=1 00:11:01.912 05:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # IFS=: 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # read -r var val 00:11:01.912 05:08:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:01.912 05:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # IFS=: 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # read -r var val 00:11:01.912 05:08:20 -- accel/accel.sh@21 -- # val=Yes 00:11:01.912 05:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # IFS=: 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # read -r var val 00:11:01.912 05:08:20 -- accel/accel.sh@21 -- # val= 00:11:01.912 05:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # IFS=: 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # read -r var val 00:11:01.912 05:08:20 -- accel/accel.sh@21 -- # val= 00:11:01.912 05:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # IFS=: 00:11:01.912 05:08:20 -- accel/accel.sh@20 -- # read -r var val 00:11:03.816 05:08:22 -- accel/accel.sh@21 -- # val= 00:11:03.816 05:08:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.816 05:08:22 -- accel/accel.sh@20 -- # IFS=: 00:11:03.816 05:08:22 -- accel/accel.sh@20 -- # read -r var val 00:11:03.816 05:08:22 -- accel/accel.sh@21 -- # val= 00:11:03.816 05:08:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.816 05:08:22 -- accel/accel.sh@20 -- # IFS=: 00:11:03.816 05:08:22 -- accel/accel.sh@20 -- # read -r var val 00:11:03.816 05:08:22 -- accel/accel.sh@21 -- # val= 00:11:03.816 05:08:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.816 05:08:22 -- accel/accel.sh@20 -- # IFS=: 00:11:03.816 05:08:22 -- accel/accel.sh@20 -- # read -r var val 00:11:03.816 05:08:22 -- accel/accel.sh@21 -- # val= 00:11:03.816 05:08:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.816 05:08:22 -- accel/accel.sh@20 -- # IFS=: 00:11:03.816 05:08:22 -- accel/accel.sh@20 -- # read -r var val 00:11:03.816 05:08:22 -- accel/accel.sh@21 -- # val= 00:11:03.816 05:08:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.816 05:08:22 -- accel/accel.sh@20 -- # IFS=: 00:11:03.816 05:08:22 -- accel/accel.sh@20 -- # read -r var val 00:11:03.816 05:08:22 -- accel/accel.sh@21 -- # val= 00:11:03.816 05:08:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.816 05:08:22 -- accel/accel.sh@20 -- # IFS=: 00:11:03.816 05:08:22 -- accel/accel.sh@20 -- # read -r var val 00:11:03.816 05:08:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:03.816 05:08:22 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:11:03.816 05:08:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:03.816 00:11:03.816 real 0m4.684s 00:11:03.816 user 0m4.176s 00:11:03.816 sys 0m0.329s 00:11:03.816 05:08:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:03.816 05:08:22 -- common/autotest_common.sh@10 -- # set +x 00:11:03.816 ************************************ 00:11:03.816 END TEST accel_copy_crc32c_C2 00:11:03.816 ************************************ 00:11:03.816 05:08:22 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:11:03.816 05:08:22 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:11:03.816 05:08:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:03.816 05:08:22 -- common/autotest_common.sh@10 -- # set +x 00:11:03.816 ************************************ 00:11:03.816 START TEST accel_dualcast 00:11:03.816 ************************************ 00:11:03.816 05:08:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:11:03.816 05:08:22 -- accel/accel.sh@16 -- # local accel_opc 00:11:03.816 05:08:22 -- accel/accel.sh@17 -- # local accel_module 00:11:03.816 05:08:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:11:03.816 05:08:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:11:03.816 05:08:22 -- accel/accel.sh@12 -- # build_accel_config 00:11:03.816 05:08:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:03.816 05:08:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:03.816 05:08:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:03.816 05:08:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:03.816 05:08:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:03.816 05:08:22 -- accel/accel.sh@41 -- # local IFS=, 00:11:03.816 05:08:22 -- accel/accel.sh@42 -- # jq -r . 00:11:03.816 [2024-07-26 05:08:22.846157] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:03.816 [2024-07-26 05:08:22.846312] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64016 ] 00:11:04.075 [2024-07-26 05:08:23.016916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.075 [2024-07-26 05:08:23.177240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.607 05:08:25 -- accel/accel.sh@18 -- # out=' 00:11:06.607 SPDK Configuration: 00:11:06.607 Core mask: 0x1 00:11:06.607 00:11:06.607 Accel Perf Configuration: 00:11:06.607 Workload Type: dualcast 00:11:06.607 Transfer size: 4096 bytes 00:11:06.607 Vector count 1 00:11:06.607 Module: software 00:11:06.607 Queue depth: 32 00:11:06.607 Allocate depth: 32 00:11:06.607 # threads/core: 1 00:11:06.607 Run time: 1 seconds 00:11:06.607 Verify: Yes 00:11:06.607 00:11:06.607 Running for 1 seconds... 00:11:06.607 00:11:06.607 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:06.607 ------------------------------------------------------------------------------------ 00:11:06.607 0,0 304384/s 1189 MiB/s 0 0 00:11:06.607 ==================================================================================== 00:11:06.607 Total 304384/s 1189 MiB/s 0 0' 00:11:06.607 05:08:25 -- accel/accel.sh@20 -- # IFS=: 00:11:06.607 05:08:25 -- accel/accel.sh@20 -- # read -r var val 00:11:06.607 05:08:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:11:06.607 05:08:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:11:06.607 05:08:25 -- accel/accel.sh@12 -- # build_accel_config 00:11:06.607 05:08:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:06.607 05:08:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:06.607 05:08:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:06.607 05:08:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:06.607 05:08:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:06.607 05:08:25 -- accel/accel.sh@41 -- # local IFS=, 00:11:06.607 05:08:25 -- accel/accel.sh@42 -- # jq -r . 
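The dualcast workload measured above writes a single 4096-byte source to two destination buffers in one operation; the software fallback is effectively a pair of memcpy calls, as the minimal sketch below spells out (dualcast_sw and the buffer names are illustrative, not SPDK symbols).

/* Minimal sketch of the dualcast operation: one source, two destinations. */
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Dualcast: duplicate the same source into two destinations in a single call. */
static void dualcast_sw(uint8_t *dst1, uint8_t *dst2, const uint8_t *src, size_t len)
{
    memcpy(dst1, src, len);
    memcpy(dst2, src, len);
}

int main(void)
{
    static uint8_t src[4096], dst1[4096], dst2[4096];

    memset(src, 0x7e, sizeof(src));
    dualcast_sw(dst1, dst2, src, sizeof(src));
    assert(memcmp(dst1, src, sizeof(src)) == 0);
    assert(memcmp(dst2, src, sizeof(src)) == 0);
    return 0;
}

304384 transfers/s at 4096 bytes each is again about 1189 MiB/s of source bandwidth; a hardware offload (Intel DSA, for example) can perform the duplication from a single descriptor, which is the kind of engine this software baseline is normally compared against.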
00:11:06.607 [2024-07-26 05:08:25.152373] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:06.608 [2024-07-26 05:08:25.152527] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64042 ] 00:11:06.608 [2024-07-26 05:08:25.319957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.608 [2024-07-26 05:08:25.488825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.608 05:08:25 -- accel/accel.sh@21 -- # val= 00:11:06.608 05:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.608 05:08:25 -- accel/accel.sh@20 -- # IFS=: 00:11:06.608 05:08:25 -- accel/accel.sh@20 -- # read -r var val 00:11:06.608 05:08:25 -- accel/accel.sh@21 -- # val= 00:11:06.608 05:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.608 05:08:25 -- accel/accel.sh@20 -- # IFS=: 00:11:06.608 05:08:25 -- accel/accel.sh@20 -- # read -r var val 00:11:06.608 05:08:25 -- accel/accel.sh@21 -- # val=0x1 00:11:06.608 05:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.608 05:08:25 -- accel/accel.sh@20 -- # IFS=: 00:11:06.608 05:08:25 -- accel/accel.sh@20 -- # read -r var val 00:11:06.608 05:08:25 -- accel/accel.sh@21 -- # val= 00:11:06.608 05:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.608 05:08:25 -- accel/accel.sh@20 -- # IFS=: 00:11:06.608 05:08:25 -- accel/accel.sh@20 -- # read -r var val 00:11:06.608 05:08:25 -- accel/accel.sh@21 -- # val= 00:11:06.608 05:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.608 05:08:25 -- accel/accel.sh@20 -- # IFS=: 00:11:06.608 05:08:25 -- accel/accel.sh@20 -- # read -r var val 00:11:06.608 05:08:25 -- accel/accel.sh@21 -- # val=dualcast 00:11:06.608 05:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.608 05:08:25 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:11:06.608 05:08:25 -- accel/accel.sh@20 -- # IFS=: 00:11:06.608 05:08:25 -- accel/accel.sh@20 -- # read -r var val 00:11:06.608 05:08:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:06.608 05:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.608 05:08:25 -- accel/accel.sh@20 -- # IFS=: 00:11:06.608 05:08:25 -- accel/accel.sh@20 -- # read -r var val 00:11:06.608 05:08:25 -- accel/accel.sh@21 -- # val= 00:11:06.608 05:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.608 05:08:25 -- accel/accel.sh@20 -- # IFS=: 00:11:06.608 05:08:25 -- accel/accel.sh@20 -- # read -r var val 00:11:06.608 05:08:25 -- accel/accel.sh@21 -- # val=software 00:11:06.608 05:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.608 05:08:25 -- accel/accel.sh@23 -- # accel_module=software 00:11:06.608 05:08:25 -- accel/accel.sh@20 -- # IFS=: 00:11:06.608 05:08:25 -- accel/accel.sh@20 -- # read -r var val 00:11:06.608 05:08:25 -- accel/accel.sh@21 -- # val=32 00:11:06.608 05:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.608 05:08:25 -- accel/accel.sh@20 -- # IFS=: 00:11:06.608 05:08:25 -- accel/accel.sh@20 -- # read -r var val 00:11:06.608 05:08:25 -- accel/accel.sh@21 -- # val=32 00:11:06.608 05:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.608 05:08:25 -- accel/accel.sh@20 -- # IFS=: 00:11:06.608 05:08:25 -- accel/accel.sh@20 -- # read -r var val 00:11:06.608 05:08:25 -- accel/accel.sh@21 -- # val=1 00:11:06.608 05:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.608 05:08:25 -- accel/accel.sh@20 -- # IFS=: 00:11:06.608 
05:08:25 -- accel/accel.sh@20 -- # read -r var val 00:11:06.608 05:08:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:06.608 05:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.608 05:08:25 -- accel/accel.sh@20 -- # IFS=: 00:11:06.608 05:08:25 -- accel/accel.sh@20 -- # read -r var val 00:11:06.608 05:08:25 -- accel/accel.sh@21 -- # val=Yes 00:11:06.608 05:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.608 05:08:25 -- accel/accel.sh@20 -- # IFS=: 00:11:06.608 05:08:25 -- accel/accel.sh@20 -- # read -r var val 00:11:06.608 05:08:25 -- accel/accel.sh@21 -- # val= 00:11:06.608 05:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.608 05:08:25 -- accel/accel.sh@20 -- # IFS=: 00:11:06.608 05:08:25 -- accel/accel.sh@20 -- # read -r var val 00:11:06.608 05:08:25 -- accel/accel.sh@21 -- # val= 00:11:06.608 05:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.608 05:08:25 -- accel/accel.sh@20 -- # IFS=: 00:11:06.608 05:08:25 -- accel/accel.sh@20 -- # read -r var val 00:11:08.511 05:08:27 -- accel/accel.sh@21 -- # val= 00:11:08.511 05:08:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.511 05:08:27 -- accel/accel.sh@20 -- # IFS=: 00:11:08.511 05:08:27 -- accel/accel.sh@20 -- # read -r var val 00:11:08.511 05:08:27 -- accel/accel.sh@21 -- # val= 00:11:08.511 05:08:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.511 05:08:27 -- accel/accel.sh@20 -- # IFS=: 00:11:08.511 05:08:27 -- accel/accel.sh@20 -- # read -r var val 00:11:08.511 05:08:27 -- accel/accel.sh@21 -- # val= 00:11:08.511 05:08:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.511 05:08:27 -- accel/accel.sh@20 -- # IFS=: 00:11:08.511 05:08:27 -- accel/accel.sh@20 -- # read -r var val 00:11:08.511 05:08:27 -- accel/accel.sh@21 -- # val= 00:11:08.511 05:08:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.511 05:08:27 -- accel/accel.sh@20 -- # IFS=: 00:11:08.511 05:08:27 -- accel/accel.sh@20 -- # read -r var val 00:11:08.511 05:08:27 -- accel/accel.sh@21 -- # val= 00:11:08.511 05:08:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.511 05:08:27 -- accel/accel.sh@20 -- # IFS=: 00:11:08.511 05:08:27 -- accel/accel.sh@20 -- # read -r var val 00:11:08.511 05:08:27 -- accel/accel.sh@21 -- # val= 00:11:08.511 05:08:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.511 05:08:27 -- accel/accel.sh@20 -- # IFS=: 00:11:08.511 05:08:27 -- accel/accel.sh@20 -- # read -r var val 00:11:08.511 05:08:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:08.511 05:08:27 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:11:08.511 05:08:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:08.511 00:11:08.511 real 0m4.612s 00:11:08.511 user 0m4.074s 00:11:08.511 sys 0m0.367s 00:11:08.511 05:08:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:08.511 05:08:27 -- common/autotest_common.sh@10 -- # set +x 00:11:08.511 ************************************ 00:11:08.511 END TEST accel_dualcast 00:11:08.511 ************************************ 00:11:08.511 05:08:27 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:11:08.511 05:08:27 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:11:08.511 05:08:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:08.511 05:08:27 -- common/autotest_common.sh@10 -- # set +x 00:11:08.511 ************************************ 00:11:08.511 START TEST accel_compare 00:11:08.511 ************************************ 00:11:08.511 05:08:27 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:11:08.511 
05:08:27 -- accel/accel.sh@16 -- # local accel_opc 00:11:08.511 05:08:27 -- accel/accel.sh@17 -- # local accel_module 00:11:08.511 05:08:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:11:08.511 05:08:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:11:08.511 05:08:27 -- accel/accel.sh@12 -- # build_accel_config 00:11:08.511 05:08:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:08.511 05:08:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:08.511 05:08:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:08.511 05:08:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:08.511 05:08:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:08.511 05:08:27 -- accel/accel.sh@41 -- # local IFS=, 00:11:08.511 05:08:27 -- accel/accel.sh@42 -- # jq -r . 00:11:08.511 [2024-07-26 05:08:27.508452] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:08.511 [2024-07-26 05:08:27.508622] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64089 ] 00:11:08.771 [2024-07-26 05:08:27.673950] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.771 [2024-07-26 05:08:27.833756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.698 05:08:29 -- accel/accel.sh@18 -- # out=' 00:11:10.698 SPDK Configuration: 00:11:10.698 Core mask: 0x1 00:11:10.698 00:11:10.698 Accel Perf Configuration: 00:11:10.698 Workload Type: compare 00:11:10.698 Transfer size: 4096 bytes 00:11:10.698 Vector count 1 00:11:10.698 Module: software 00:11:10.698 Queue depth: 32 00:11:10.698 Allocate depth: 32 00:11:10.698 # threads/core: 1 00:11:10.698 Run time: 1 seconds 00:11:10.698 Verify: Yes 00:11:10.698 00:11:10.698 Running for 1 seconds... 00:11:10.698 00:11:10.698 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:10.698 ------------------------------------------------------------------------------------ 00:11:10.698 0,0 406720/s 1588 MiB/s 0 0 00:11:10.698 ==================================================================================== 00:11:10.698 Total 406720/s 1588 MiB/s 0 0' 00:11:10.698 05:08:29 -- accel/accel.sh@20 -- # IFS=: 00:11:10.698 05:08:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:11:10.698 05:08:29 -- accel/accel.sh@20 -- # read -r var val 00:11:10.698 05:08:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:11:10.698 05:08:29 -- accel/accel.sh@12 -- # build_accel_config 00:11:10.698 05:08:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:10.699 05:08:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:10.699 05:08:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:10.699 05:08:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:10.699 05:08:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:10.699 05:08:29 -- accel/accel.sh@41 -- # local IFS=, 00:11:10.699 05:08:29 -- accel/accel.sh@42 -- # jq -r . 00:11:10.957 [2024-07-26 05:08:29.813450] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
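The compare workload above checks two 4096-byte buffers for equality; the Failed/Miscompares columns in the result table count mismatches. A software implementation reduces to memcmp, as in the sketch below (accel_compare_sw is an invented name, not an SPDK function).

/* Sketch of the compare workload: report whether two equally sized buffers differ. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Returns 0 when the buffers match, 1 otherwise (a "miscompare"). */
static int accel_compare_sw(const uint8_t *a, const uint8_t *b, size_t len)
{
    return memcmp(a, b, len) != 0;
}

int main(void)
{
    static uint8_t x[4096], y[4096];

    memset(x, 0x11, sizeof(x));
    memcpy(y, x, sizeof(y));
    printf("miscompares: %d\n", accel_compare_sw(x, y, sizeof(x)));   /* 0: buffers match */

    y[100] ^= 0xff;                                                   /* flip one byte */
    printf("miscompares: %d\n", accel_compare_sw(x, y, sizeof(x)));   /* 1: buffers differ */
    return 0;
}

For reference, 406720 transfers/s at 4096 bytes each corresponds to the ~1588 MiB/s shown in the table above.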
00:11:10.957 [2024-07-26 05:08:29.813632] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64115 ] 00:11:10.957 [2024-07-26 05:08:29.982882] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.215 [2024-07-26 05:08:30.150490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.215 05:08:30 -- accel/accel.sh@21 -- # val= 00:11:11.215 05:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.215 05:08:30 -- accel/accel.sh@20 -- # IFS=: 00:11:11.215 05:08:30 -- accel/accel.sh@20 -- # read -r var val 00:11:11.215 05:08:30 -- accel/accel.sh@21 -- # val= 00:11:11.215 05:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.215 05:08:30 -- accel/accel.sh@20 -- # IFS=: 00:11:11.215 05:08:30 -- accel/accel.sh@20 -- # read -r var val 00:11:11.215 05:08:30 -- accel/accel.sh@21 -- # val=0x1 00:11:11.215 05:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.215 05:08:30 -- accel/accel.sh@20 -- # IFS=: 00:11:11.215 05:08:30 -- accel/accel.sh@20 -- # read -r var val 00:11:11.215 05:08:30 -- accel/accel.sh@21 -- # val= 00:11:11.215 05:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.215 05:08:30 -- accel/accel.sh@20 -- # IFS=: 00:11:11.215 05:08:30 -- accel/accel.sh@20 -- # read -r var val 00:11:11.215 05:08:30 -- accel/accel.sh@21 -- # val= 00:11:11.474 05:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.474 05:08:30 -- accel/accel.sh@20 -- # IFS=: 00:11:11.474 05:08:30 -- accel/accel.sh@20 -- # read -r var val 00:11:11.474 05:08:30 -- accel/accel.sh@21 -- # val=compare 00:11:11.474 05:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.474 05:08:30 -- accel/accel.sh@24 -- # accel_opc=compare 00:11:11.474 05:08:30 -- accel/accel.sh@20 -- # IFS=: 00:11:11.474 05:08:30 -- accel/accel.sh@20 -- # read -r var val 00:11:11.474 05:08:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:11.474 05:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.474 05:08:30 -- accel/accel.sh@20 -- # IFS=: 00:11:11.474 05:08:30 -- accel/accel.sh@20 -- # read -r var val 00:11:11.474 05:08:30 -- accel/accel.sh@21 -- # val= 00:11:11.474 05:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.474 05:08:30 -- accel/accel.sh@20 -- # IFS=: 00:11:11.474 05:08:30 -- accel/accel.sh@20 -- # read -r var val 00:11:11.474 05:08:30 -- accel/accel.sh@21 -- # val=software 00:11:11.474 05:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.474 05:08:30 -- accel/accel.sh@23 -- # accel_module=software 00:11:11.474 05:08:30 -- accel/accel.sh@20 -- # IFS=: 00:11:11.474 05:08:30 -- accel/accel.sh@20 -- # read -r var val 00:11:11.474 05:08:30 -- accel/accel.sh@21 -- # val=32 00:11:11.474 05:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.474 05:08:30 -- accel/accel.sh@20 -- # IFS=: 00:11:11.474 05:08:30 -- accel/accel.sh@20 -- # read -r var val 00:11:11.474 05:08:30 -- accel/accel.sh@21 -- # val=32 00:11:11.474 05:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.474 05:08:30 -- accel/accel.sh@20 -- # IFS=: 00:11:11.474 05:08:30 -- accel/accel.sh@20 -- # read -r var val 00:11:11.474 05:08:30 -- accel/accel.sh@21 -- # val=1 00:11:11.474 05:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.474 05:08:30 -- accel/accel.sh@20 -- # IFS=: 00:11:11.474 05:08:30 -- accel/accel.sh@20 -- # read -r var val 00:11:11.474 05:08:30 -- accel/accel.sh@21 -- # val='1 seconds' 
00:11:11.474 05:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.474 05:08:30 -- accel/accel.sh@20 -- # IFS=: 00:11:11.474 05:08:30 -- accel/accel.sh@20 -- # read -r var val 00:11:11.474 05:08:30 -- accel/accel.sh@21 -- # val=Yes 00:11:11.474 05:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.474 05:08:30 -- accel/accel.sh@20 -- # IFS=: 00:11:11.474 05:08:30 -- accel/accel.sh@20 -- # read -r var val 00:11:11.474 05:08:30 -- accel/accel.sh@21 -- # val= 00:11:11.474 05:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.474 05:08:30 -- accel/accel.sh@20 -- # IFS=: 00:11:11.474 05:08:30 -- accel/accel.sh@20 -- # read -r var val 00:11:11.474 05:08:30 -- accel/accel.sh@21 -- # val= 00:11:11.474 05:08:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.474 05:08:30 -- accel/accel.sh@20 -- # IFS=: 00:11:11.474 05:08:30 -- accel/accel.sh@20 -- # read -r var val 00:11:13.377 05:08:32 -- accel/accel.sh@21 -- # val= 00:11:13.377 05:08:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.377 05:08:32 -- accel/accel.sh@20 -- # IFS=: 00:11:13.377 05:08:32 -- accel/accel.sh@20 -- # read -r var val 00:11:13.377 05:08:32 -- accel/accel.sh@21 -- # val= 00:11:13.377 05:08:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.377 05:08:32 -- accel/accel.sh@20 -- # IFS=: 00:11:13.377 05:08:32 -- accel/accel.sh@20 -- # read -r var val 00:11:13.377 05:08:32 -- accel/accel.sh@21 -- # val= 00:11:13.377 05:08:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.377 05:08:32 -- accel/accel.sh@20 -- # IFS=: 00:11:13.377 05:08:32 -- accel/accel.sh@20 -- # read -r var val 00:11:13.377 05:08:32 -- accel/accel.sh@21 -- # val= 00:11:13.377 05:08:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.377 05:08:32 -- accel/accel.sh@20 -- # IFS=: 00:11:13.377 05:08:32 -- accel/accel.sh@20 -- # read -r var val 00:11:13.377 05:08:32 -- accel/accel.sh@21 -- # val= 00:11:13.377 05:08:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.377 05:08:32 -- accel/accel.sh@20 -- # IFS=: 00:11:13.377 05:08:32 -- accel/accel.sh@20 -- # read -r var val 00:11:13.377 05:08:32 -- accel/accel.sh@21 -- # val= 00:11:13.377 05:08:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.377 05:08:32 -- accel/accel.sh@20 -- # IFS=: 00:11:13.377 05:08:32 -- accel/accel.sh@20 -- # read -r var val 00:11:13.377 ************************************ 00:11:13.377 END TEST accel_compare 00:11:13.377 ************************************ 00:11:13.377 05:08:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:13.377 05:08:32 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:11:13.377 05:08:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:13.377 00:11:13.377 real 0m4.633s 00:11:13.377 user 0m4.147s 00:11:13.377 sys 0m0.309s 00:11:13.377 05:08:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:13.377 05:08:32 -- common/autotest_common.sh@10 -- # set +x 00:11:13.377 05:08:32 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:11:13.377 05:08:32 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:11:13.377 05:08:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:13.377 05:08:32 -- common/autotest_common.sh@10 -- # set +x 00:11:13.377 ************************************ 00:11:13.377 START TEST accel_xor 00:11:13.377 ************************************ 00:11:13.377 05:08:32 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:11:13.377 05:08:32 -- accel/accel.sh@16 -- # local accel_opc 00:11:13.377 05:08:32 -- accel/accel.sh@17 -- # local accel_module 00:11:13.377 
05:08:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:11:13.377 05:08:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:11:13.377 05:08:32 -- accel/accel.sh@12 -- # build_accel_config 00:11:13.377 05:08:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:13.377 05:08:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:13.377 05:08:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:13.377 05:08:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:13.377 05:08:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:13.377 05:08:32 -- accel/accel.sh@41 -- # local IFS=, 00:11:13.377 05:08:32 -- accel/accel.sh@42 -- # jq -r . 00:11:13.377 [2024-07-26 05:08:32.204057] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:13.377 [2024-07-26 05:08:32.204223] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64156 ] 00:11:13.377 [2024-07-26 05:08:32.374513] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.636 [2024-07-26 05:08:32.556184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.539 05:08:34 -- accel/accel.sh@18 -- # out=' 00:11:15.539 SPDK Configuration: 00:11:15.539 Core mask: 0x1 00:11:15.539 00:11:15.539 Accel Perf Configuration: 00:11:15.539 Workload Type: xor 00:11:15.539 Source buffers: 2 00:11:15.539 Transfer size: 4096 bytes 00:11:15.539 Vector count 1 00:11:15.539 Module: software 00:11:15.539 Queue depth: 32 00:11:15.539 Allocate depth: 32 00:11:15.539 # threads/core: 1 00:11:15.539 Run time: 1 seconds 00:11:15.539 Verify: Yes 00:11:15.539 00:11:15.539 Running for 1 seconds... 00:11:15.539 00:11:15.539 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:15.539 ------------------------------------------------------------------------------------ 00:11:15.539 0,0 210048/s 820 MiB/s 0 0 00:11:15.539 ==================================================================================== 00:11:15.539 Total 210048/s 820 MiB/s 0 0' 00:11:15.539 05:08:34 -- accel/accel.sh@20 -- # IFS=: 00:11:15.539 05:08:34 -- accel/accel.sh@20 -- # read -r var val 00:11:15.539 05:08:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:11:15.539 05:08:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:11:15.539 05:08:34 -- accel/accel.sh@12 -- # build_accel_config 00:11:15.539 05:08:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:15.539 05:08:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:15.539 05:08:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:15.539 05:08:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:15.539 05:08:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:15.539 05:08:34 -- accel/accel.sh@41 -- # local IFS=, 00:11:15.539 05:08:34 -- accel/accel.sh@42 -- # jq -r . 00:11:15.539 [2024-07-26 05:08:34.535755] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
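The xor workload above computes byte-wise parity across its source buffers (two 4096-byte sources here; the test that follows repeats the run with three). This is the RAID-5-style parity primitive, and the software path is a simple nested loop, sketched below with an invented xor_n helper.

/* Sketch of the xor workload: dst[i] = src0[i] ^ src1[i] ^ ... ^ srcN-1[i]. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* XOR nsrcs equally sized source buffers into dst. */
static void xor_n(uint8_t *dst, const uint8_t *const *srcs, int nsrcs, size_t len)
{
    memcpy(dst, srcs[0], len);
    for (int s = 1; s < nsrcs; s++)
        for (size_t i = 0; i < len; i++)
            dst[i] ^= srcs[s][i];
}

int main(void)
{
    static uint8_t a[4096], b[4096], parity[4096];
    const uint8_t *srcs[] = { a, b };            /* two sources, as in the run above */

    memset(a, 0xf0, sizeof(a));
    memset(b, 0x0f, sizeof(b));
    xor_n(parity, srcs, 2, sizeof(parity));
    printf("parity[0] = 0x%02x\n", parity[0]);   /* 0xf0 ^ 0x0f = 0xff */
    return 0;
}

With two sources, 210048 transfers/s at 4096 bytes is about the 820 MiB/s reported above; the three-source run that follows drops to roughly 785 MiB/s, presumably because each extra source adds another full pass over the data.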
00:11:15.539 [2024-07-26 05:08:34.535925] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64182 ] 00:11:15.798 [2024-07-26 05:08:34.708133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.798 [2024-07-26 05:08:34.896408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.057 05:08:35 -- accel/accel.sh@21 -- # val= 00:11:16.057 05:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.057 05:08:35 -- accel/accel.sh@20 -- # IFS=: 00:11:16.057 05:08:35 -- accel/accel.sh@20 -- # read -r var val 00:11:16.057 05:08:35 -- accel/accel.sh@21 -- # val= 00:11:16.057 05:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.057 05:08:35 -- accel/accel.sh@20 -- # IFS=: 00:11:16.057 05:08:35 -- accel/accel.sh@20 -- # read -r var val 00:11:16.057 05:08:35 -- accel/accel.sh@21 -- # val=0x1 00:11:16.057 05:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.057 05:08:35 -- accel/accel.sh@20 -- # IFS=: 00:11:16.057 05:08:35 -- accel/accel.sh@20 -- # read -r var val 00:11:16.057 05:08:35 -- accel/accel.sh@21 -- # val= 00:11:16.057 05:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.057 05:08:35 -- accel/accel.sh@20 -- # IFS=: 00:11:16.057 05:08:35 -- accel/accel.sh@20 -- # read -r var val 00:11:16.057 05:08:35 -- accel/accel.sh@21 -- # val= 00:11:16.057 05:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.057 05:08:35 -- accel/accel.sh@20 -- # IFS=: 00:11:16.057 05:08:35 -- accel/accel.sh@20 -- # read -r var val 00:11:16.057 05:08:35 -- accel/accel.sh@21 -- # val=xor 00:11:16.057 05:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.057 05:08:35 -- accel/accel.sh@24 -- # accel_opc=xor 00:11:16.057 05:08:35 -- accel/accel.sh@20 -- # IFS=: 00:11:16.057 05:08:35 -- accel/accel.sh@20 -- # read -r var val 00:11:16.057 05:08:35 -- accel/accel.sh@21 -- # val=2 00:11:16.057 05:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.057 05:08:35 -- accel/accel.sh@20 -- # IFS=: 00:11:16.057 05:08:35 -- accel/accel.sh@20 -- # read -r var val 00:11:16.057 05:08:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:16.057 05:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.057 05:08:35 -- accel/accel.sh@20 -- # IFS=: 00:11:16.057 05:08:35 -- accel/accel.sh@20 -- # read -r var val 00:11:16.057 05:08:35 -- accel/accel.sh@21 -- # val= 00:11:16.057 05:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.057 05:08:35 -- accel/accel.sh@20 -- # IFS=: 00:11:16.057 05:08:35 -- accel/accel.sh@20 -- # read -r var val 00:11:16.057 05:08:35 -- accel/accel.sh@21 -- # val=software 00:11:16.057 05:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.057 05:08:35 -- accel/accel.sh@23 -- # accel_module=software 00:11:16.057 05:08:35 -- accel/accel.sh@20 -- # IFS=: 00:11:16.057 05:08:35 -- accel/accel.sh@20 -- # read -r var val 00:11:16.057 05:08:35 -- accel/accel.sh@21 -- # val=32 00:11:16.057 05:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.057 05:08:35 -- accel/accel.sh@20 -- # IFS=: 00:11:16.057 05:08:35 -- accel/accel.sh@20 -- # read -r var val 00:11:16.057 05:08:35 -- accel/accel.sh@21 -- # val=32 00:11:16.057 05:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.057 05:08:35 -- accel/accel.sh@20 -- # IFS=: 00:11:16.057 05:08:35 -- accel/accel.sh@20 -- # read -r var val 00:11:16.057 05:08:35 -- accel/accel.sh@21 -- # val=1 00:11:16.058 05:08:35 -- 
accel/accel.sh@22 -- # case "$var" in 00:11:16.058 05:08:35 -- accel/accel.sh@20 -- # IFS=: 00:11:16.058 05:08:35 -- accel/accel.sh@20 -- # read -r var val 00:11:16.058 05:08:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:16.058 05:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.058 05:08:35 -- accel/accel.sh@20 -- # IFS=: 00:11:16.058 05:08:35 -- accel/accel.sh@20 -- # read -r var val 00:11:16.058 05:08:35 -- accel/accel.sh@21 -- # val=Yes 00:11:16.058 05:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.058 05:08:35 -- accel/accel.sh@20 -- # IFS=: 00:11:16.058 05:08:35 -- accel/accel.sh@20 -- # read -r var val 00:11:16.058 05:08:35 -- accel/accel.sh@21 -- # val= 00:11:16.058 05:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.058 05:08:35 -- accel/accel.sh@20 -- # IFS=: 00:11:16.058 05:08:35 -- accel/accel.sh@20 -- # read -r var val 00:11:16.058 05:08:35 -- accel/accel.sh@21 -- # val= 00:11:16.058 05:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.058 05:08:35 -- accel/accel.sh@20 -- # IFS=: 00:11:16.058 05:08:35 -- accel/accel.sh@20 -- # read -r var val 00:11:17.963 05:08:36 -- accel/accel.sh@21 -- # val= 00:11:17.963 05:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.963 05:08:36 -- accel/accel.sh@20 -- # IFS=: 00:11:17.964 05:08:36 -- accel/accel.sh@20 -- # read -r var val 00:11:17.964 05:08:36 -- accel/accel.sh@21 -- # val= 00:11:17.964 05:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.964 05:08:36 -- accel/accel.sh@20 -- # IFS=: 00:11:17.964 05:08:36 -- accel/accel.sh@20 -- # read -r var val 00:11:17.964 05:08:36 -- accel/accel.sh@21 -- # val= 00:11:17.964 05:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.964 05:08:36 -- accel/accel.sh@20 -- # IFS=: 00:11:17.964 05:08:36 -- accel/accel.sh@20 -- # read -r var val 00:11:17.964 05:08:36 -- accel/accel.sh@21 -- # val= 00:11:17.964 05:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.964 05:08:36 -- accel/accel.sh@20 -- # IFS=: 00:11:17.964 05:08:36 -- accel/accel.sh@20 -- # read -r var val 00:11:17.964 05:08:36 -- accel/accel.sh@21 -- # val= 00:11:17.964 05:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.964 05:08:36 -- accel/accel.sh@20 -- # IFS=: 00:11:17.964 05:08:36 -- accel/accel.sh@20 -- # read -r var val 00:11:17.964 05:08:36 -- accel/accel.sh@21 -- # val= 00:11:17.964 05:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.964 05:08:36 -- accel/accel.sh@20 -- # IFS=: 00:11:17.964 05:08:36 -- accel/accel.sh@20 -- # read -r var val 00:11:17.964 05:08:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:17.964 05:08:36 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:11:17.964 ************************************ 00:11:17.964 END TEST accel_xor 00:11:17.964 05:08:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:17.964 00:11:17.964 real 0m4.727s 00:11:17.964 user 0m4.207s 00:11:17.964 sys 0m0.333s 00:11:17.964 05:08:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:17.964 05:08:36 -- common/autotest_common.sh@10 -- # set +x 00:11:17.964 ************************************ 00:11:17.964 05:08:36 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:11:17.964 05:08:36 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:11:17.964 05:08:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:17.964 05:08:36 -- common/autotest_common.sh@10 -- # set +x 00:11:17.964 ************************************ 00:11:17.964 START TEST accel_xor 00:11:17.964 ************************************ 00:11:17.964 
05:08:36 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:11:17.964 05:08:36 -- accel/accel.sh@16 -- # local accel_opc 00:11:17.964 05:08:36 -- accel/accel.sh@17 -- # local accel_module 00:11:17.964 05:08:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:11:17.964 05:08:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:11:17.964 05:08:36 -- accel/accel.sh@12 -- # build_accel_config 00:11:17.964 05:08:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:17.964 05:08:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:17.964 05:08:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:17.964 05:08:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:17.964 05:08:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:17.964 05:08:36 -- accel/accel.sh@41 -- # local IFS=, 00:11:17.964 05:08:36 -- accel/accel.sh@42 -- # jq -r . 00:11:17.964 [2024-07-26 05:08:36.978899] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:17.964 [2024-07-26 05:08:36.979230] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64234 ] 00:11:18.223 [2024-07-26 05:08:37.140321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.223 [2024-07-26 05:08:37.322154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.758 05:08:39 -- accel/accel.sh@18 -- # out=' 00:11:20.758 SPDK Configuration: 00:11:20.758 Core mask: 0x1 00:11:20.758 00:11:20.758 Accel Perf Configuration: 00:11:20.758 Workload Type: xor 00:11:20.758 Source buffers: 3 00:11:20.758 Transfer size: 4096 bytes 00:11:20.758 Vector count 1 00:11:20.758 Module: software 00:11:20.758 Queue depth: 32 00:11:20.758 Allocate depth: 32 00:11:20.758 # threads/core: 1 00:11:20.758 Run time: 1 seconds 00:11:20.758 Verify: Yes 00:11:20.758 00:11:20.758 Running for 1 seconds... 00:11:20.758 00:11:20.758 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:20.758 ------------------------------------------------------------------------------------ 00:11:20.758 0,0 201056/s 785 MiB/s 0 0 00:11:20.758 ==================================================================================== 00:11:20.758 Total 201056/s 785 MiB/s 0 0' 00:11:20.758 05:08:39 -- accel/accel.sh@20 -- # IFS=: 00:11:20.758 05:08:39 -- accel/accel.sh@20 -- # read -r var val 00:11:20.758 05:08:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:11:20.758 05:08:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:11:20.758 05:08:39 -- accel/accel.sh@12 -- # build_accel_config 00:11:20.758 05:08:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:20.758 05:08:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:20.758 05:08:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:20.758 05:08:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:20.758 05:08:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:20.758 05:08:39 -- accel/accel.sh@41 -- # local IFS=, 00:11:20.758 05:08:39 -- accel/accel.sh@42 -- # jq -r . 00:11:20.758 [2024-07-26 05:08:39.345903] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:11:20.758 [2024-07-26 05:08:39.346248] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64260 ] 00:11:20.758 [2024-07-26 05:08:39.518908] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.758 [2024-07-26 05:08:39.695268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.758 05:08:39 -- accel/accel.sh@21 -- # val= 00:11:20.758 05:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.758 05:08:39 -- accel/accel.sh@20 -- # IFS=: 00:11:20.758 05:08:39 -- accel/accel.sh@20 -- # read -r var val 00:11:20.758 05:08:39 -- accel/accel.sh@21 -- # val= 00:11:20.758 05:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.758 05:08:39 -- accel/accel.sh@20 -- # IFS=: 00:11:20.758 05:08:39 -- accel/accel.sh@20 -- # read -r var val 00:11:20.758 05:08:39 -- accel/accel.sh@21 -- # val=0x1 00:11:20.758 05:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.758 05:08:39 -- accel/accel.sh@20 -- # IFS=: 00:11:20.758 05:08:39 -- accel/accel.sh@20 -- # read -r var val 00:11:20.758 05:08:39 -- accel/accel.sh@21 -- # val= 00:11:20.758 05:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.758 05:08:39 -- accel/accel.sh@20 -- # IFS=: 00:11:20.758 05:08:39 -- accel/accel.sh@20 -- # read -r var val 00:11:20.758 05:08:39 -- accel/accel.sh@21 -- # val= 00:11:20.758 05:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.758 05:08:39 -- accel/accel.sh@20 -- # IFS=: 00:11:20.758 05:08:39 -- accel/accel.sh@20 -- # read -r var val 00:11:20.758 05:08:39 -- accel/accel.sh@21 -- # val=xor 00:11:20.758 05:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.758 05:08:39 -- accel/accel.sh@24 -- # accel_opc=xor 00:11:20.758 05:08:39 -- accel/accel.sh@20 -- # IFS=: 00:11:20.758 05:08:39 -- accel/accel.sh@20 -- # read -r var val 00:11:20.758 05:08:39 -- accel/accel.sh@21 -- # val=3 00:11:21.017 05:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.017 05:08:39 -- accel/accel.sh@20 -- # IFS=: 00:11:21.017 05:08:39 -- accel/accel.sh@20 -- # read -r var val 00:11:21.017 05:08:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:21.017 05:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.017 05:08:39 -- accel/accel.sh@20 -- # IFS=: 00:11:21.017 05:08:39 -- accel/accel.sh@20 -- # read -r var val 00:11:21.017 05:08:39 -- accel/accel.sh@21 -- # val= 00:11:21.017 05:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.017 05:08:39 -- accel/accel.sh@20 -- # IFS=: 00:11:21.017 05:08:39 -- accel/accel.sh@20 -- # read -r var val 00:11:21.017 05:08:39 -- accel/accel.sh@21 -- # val=software 00:11:21.017 05:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.017 05:08:39 -- accel/accel.sh@23 -- # accel_module=software 00:11:21.017 05:08:39 -- accel/accel.sh@20 -- # IFS=: 00:11:21.017 05:08:39 -- accel/accel.sh@20 -- # read -r var val 00:11:21.017 05:08:39 -- accel/accel.sh@21 -- # val=32 00:11:21.017 05:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.017 05:08:39 -- accel/accel.sh@20 -- # IFS=: 00:11:21.017 05:08:39 -- accel/accel.sh@20 -- # read -r var val 00:11:21.017 05:08:39 -- accel/accel.sh@21 -- # val=32 00:11:21.017 05:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.017 05:08:39 -- accel/accel.sh@20 -- # IFS=: 00:11:21.017 05:08:39 -- accel/accel.sh@20 -- # read -r var val 00:11:21.017 05:08:39 -- accel/accel.sh@21 -- # val=1 00:11:21.017 05:08:39 -- 
accel/accel.sh@22 -- # case "$var" in 00:11:21.017 05:08:39 -- accel/accel.sh@20 -- # IFS=: 00:11:21.017 05:08:39 -- accel/accel.sh@20 -- # read -r var val 00:11:21.017 05:08:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:21.017 05:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.017 05:08:39 -- accel/accel.sh@20 -- # IFS=: 00:11:21.017 05:08:39 -- accel/accel.sh@20 -- # read -r var val 00:11:21.017 05:08:39 -- accel/accel.sh@21 -- # val=Yes 00:11:21.017 05:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.017 05:08:39 -- accel/accel.sh@20 -- # IFS=: 00:11:21.017 05:08:39 -- accel/accel.sh@20 -- # read -r var val 00:11:21.017 05:08:39 -- accel/accel.sh@21 -- # val= 00:11:21.017 05:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.017 05:08:39 -- accel/accel.sh@20 -- # IFS=: 00:11:21.017 05:08:39 -- accel/accel.sh@20 -- # read -r var val 00:11:21.017 05:08:39 -- accel/accel.sh@21 -- # val= 00:11:21.017 05:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.017 05:08:39 -- accel/accel.sh@20 -- # IFS=: 00:11:21.017 05:08:39 -- accel/accel.sh@20 -- # read -r var val 00:11:22.922 05:08:41 -- accel/accel.sh@21 -- # val= 00:11:22.922 05:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.922 05:08:41 -- accel/accel.sh@20 -- # IFS=: 00:11:22.922 05:08:41 -- accel/accel.sh@20 -- # read -r var val 00:11:22.922 05:08:41 -- accel/accel.sh@21 -- # val= 00:11:22.922 05:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.922 05:08:41 -- accel/accel.sh@20 -- # IFS=: 00:11:22.922 05:08:41 -- accel/accel.sh@20 -- # read -r var val 00:11:22.922 05:08:41 -- accel/accel.sh@21 -- # val= 00:11:22.922 05:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.922 05:08:41 -- accel/accel.sh@20 -- # IFS=: 00:11:22.922 05:08:41 -- accel/accel.sh@20 -- # read -r var val 00:11:22.922 05:08:41 -- accel/accel.sh@21 -- # val= 00:11:22.922 05:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.922 05:08:41 -- accel/accel.sh@20 -- # IFS=: 00:11:22.922 05:08:41 -- accel/accel.sh@20 -- # read -r var val 00:11:22.922 05:08:41 -- accel/accel.sh@21 -- # val= 00:11:22.922 05:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.922 05:08:41 -- accel/accel.sh@20 -- # IFS=: 00:11:22.922 05:08:41 -- accel/accel.sh@20 -- # read -r var val 00:11:22.922 05:08:41 -- accel/accel.sh@21 -- # val= 00:11:22.922 05:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.922 05:08:41 -- accel/accel.sh@20 -- # IFS=: 00:11:22.922 05:08:41 -- accel/accel.sh@20 -- # read -r var val 00:11:22.922 05:08:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:22.922 05:08:41 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:11:22.922 ************************************ 00:11:22.922 END TEST accel_xor 00:11:22.922 ************************************ 00:11:22.922 05:08:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:22.922 00:11:22.922 real 0m4.744s 00:11:22.922 user 0m4.242s 00:11:22.922 sys 0m0.318s 00:11:22.922 05:08:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:22.922 05:08:41 -- common/autotest_common.sh@10 -- # set +x 00:11:22.922 05:08:41 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:11:22.922 05:08:41 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:22.922 05:08:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:22.922 05:08:41 -- common/autotest_common.sh@10 -- # set +x 00:11:22.922 ************************************ 00:11:22.922 START TEST accel_dif_verify 00:11:22.922 ************************************ 
00:11:22.922 05:08:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:11:22.922 05:08:41 -- accel/accel.sh@16 -- # local accel_opc 00:11:22.922 05:08:41 -- accel/accel.sh@17 -- # local accel_module 00:11:22.922 05:08:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:11:22.922 05:08:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:11:22.922 05:08:41 -- accel/accel.sh@12 -- # build_accel_config 00:11:22.922 05:08:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:22.922 05:08:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:22.922 05:08:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:22.922 05:08:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:22.922 05:08:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:22.922 05:08:41 -- accel/accel.sh@41 -- # local IFS=, 00:11:22.922 05:08:41 -- accel/accel.sh@42 -- # jq -r . 00:11:22.922 [2024-07-26 05:08:41.774945] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:22.922 [2024-07-26 05:08:41.775140] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64307 ] 00:11:22.922 [2024-07-26 05:08:41.943604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.181 [2024-07-26 05:08:42.115941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.085 05:08:44 -- accel/accel.sh@18 -- # out=' 00:11:25.085 SPDK Configuration: 00:11:25.085 Core mask: 0x1 00:11:25.085 00:11:25.085 Accel Perf Configuration: 00:11:25.085 Workload Type: dif_verify 00:11:25.085 Vector size: 4096 bytes 00:11:25.085 Transfer size: 4096 bytes 00:11:25.085 Block size: 512 bytes 00:11:25.085 Metadata size: 8 bytes 00:11:25.085 Vector count 1 00:11:25.085 Module: software 00:11:25.085 Queue depth: 32 00:11:25.085 Allocate depth: 32 00:11:25.085 # threads/core: 1 00:11:25.085 Run time: 1 seconds 00:11:25.085 Verify: No 00:11:25.085 00:11:25.085 Running for 1 seconds... 00:11:25.085 00:11:25.085 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:25.085 ------------------------------------------------------------------------------------ 00:11:25.085 0,0 96064/s 381 MiB/s 0 0 00:11:25.085 ==================================================================================== 00:11:25.085 Total 96064/s 375 MiB/s 0 0' 00:11:25.085 05:08:44 -- accel/accel.sh@20 -- # IFS=: 00:11:25.085 05:08:44 -- accel/accel.sh@20 -- # read -r var val 00:11:25.085 05:08:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:11:25.085 05:08:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:11:25.085 05:08:44 -- accel/accel.sh@12 -- # build_accel_config 00:11:25.085 05:08:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:25.085 05:08:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:25.085 05:08:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:25.085 05:08:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:25.085 05:08:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:25.085 05:08:44 -- accel/accel.sh@41 -- # local IFS=, 00:11:25.085 05:08:44 -- accel/accel.sh@42 -- # jq -r . 00:11:25.085 [2024-07-26 05:08:44.134308] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:11:25.085 [2024-07-26 05:08:44.134462] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64333 ] 00:11:25.344 [2024-07-26 05:08:44.304161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.603 [2024-07-26 05:08:44.474947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.603 05:08:44 -- accel/accel.sh@21 -- # val= 00:11:25.603 05:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # IFS=: 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # read -r var val 00:11:25.603 05:08:44 -- accel/accel.sh@21 -- # val= 00:11:25.603 05:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # IFS=: 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # read -r var val 00:11:25.603 05:08:44 -- accel/accel.sh@21 -- # val=0x1 00:11:25.603 05:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # IFS=: 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # read -r var val 00:11:25.603 05:08:44 -- accel/accel.sh@21 -- # val= 00:11:25.603 05:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # IFS=: 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # read -r var val 00:11:25.603 05:08:44 -- accel/accel.sh@21 -- # val= 00:11:25.603 05:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # IFS=: 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # read -r var val 00:11:25.603 05:08:44 -- accel/accel.sh@21 -- # val=dif_verify 00:11:25.603 05:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.603 05:08:44 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # IFS=: 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # read -r var val 00:11:25.603 05:08:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:25.603 05:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # IFS=: 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # read -r var val 00:11:25.603 05:08:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:25.603 05:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # IFS=: 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # read -r var val 00:11:25.603 05:08:44 -- accel/accel.sh@21 -- # val='512 bytes' 00:11:25.603 05:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # IFS=: 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # read -r var val 00:11:25.603 05:08:44 -- accel/accel.sh@21 -- # val='8 bytes' 00:11:25.603 05:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # IFS=: 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # read -r var val 00:11:25.603 05:08:44 -- accel/accel.sh@21 -- # val= 00:11:25.603 05:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # IFS=: 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # read -r var val 00:11:25.603 05:08:44 -- accel/accel.sh@21 -- # val=software 00:11:25.603 05:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.603 05:08:44 -- accel/accel.sh@23 -- # accel_module=software 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # IFS=: 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # read -r var val 00:11:25.603 05:08:44 -- accel/accel.sh@21 
-- # val=32 00:11:25.603 05:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # IFS=: 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # read -r var val 00:11:25.603 05:08:44 -- accel/accel.sh@21 -- # val=32 00:11:25.603 05:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # IFS=: 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # read -r var val 00:11:25.603 05:08:44 -- accel/accel.sh@21 -- # val=1 00:11:25.603 05:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # IFS=: 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # read -r var val 00:11:25.603 05:08:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:25.603 05:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # IFS=: 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # read -r var val 00:11:25.603 05:08:44 -- accel/accel.sh@21 -- # val=No 00:11:25.603 05:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # IFS=: 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # read -r var val 00:11:25.603 05:08:44 -- accel/accel.sh@21 -- # val= 00:11:25.603 05:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.603 05:08:44 -- accel/accel.sh@20 -- # IFS=: 00:11:25.604 05:08:44 -- accel/accel.sh@20 -- # read -r var val 00:11:25.604 05:08:44 -- accel/accel.sh@21 -- # val= 00:11:25.604 05:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.604 05:08:44 -- accel/accel.sh@20 -- # IFS=: 00:11:25.604 05:08:44 -- accel/accel.sh@20 -- # read -r var val 00:11:27.529 05:08:46 -- accel/accel.sh@21 -- # val= 00:11:27.529 05:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.529 05:08:46 -- accel/accel.sh@20 -- # IFS=: 00:11:27.529 05:08:46 -- accel/accel.sh@20 -- # read -r var val 00:11:27.529 05:08:46 -- accel/accel.sh@21 -- # val= 00:11:27.529 05:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.529 05:08:46 -- accel/accel.sh@20 -- # IFS=: 00:11:27.529 05:08:46 -- accel/accel.sh@20 -- # read -r var val 00:11:27.529 05:08:46 -- accel/accel.sh@21 -- # val= 00:11:27.529 05:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.529 05:08:46 -- accel/accel.sh@20 -- # IFS=: 00:11:27.529 05:08:46 -- accel/accel.sh@20 -- # read -r var val 00:11:27.529 05:08:46 -- accel/accel.sh@21 -- # val= 00:11:27.529 05:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.529 05:08:46 -- accel/accel.sh@20 -- # IFS=: 00:11:27.529 05:08:46 -- accel/accel.sh@20 -- # read -r var val 00:11:27.529 05:08:46 -- accel/accel.sh@21 -- # val= 00:11:27.529 05:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.529 05:08:46 -- accel/accel.sh@20 -- # IFS=: 00:11:27.529 05:08:46 -- accel/accel.sh@20 -- # read -r var val 00:11:27.529 05:08:46 -- accel/accel.sh@21 -- # val= 00:11:27.529 05:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.529 05:08:46 -- accel/accel.sh@20 -- # IFS=: 00:11:27.529 05:08:46 -- accel/accel.sh@20 -- # read -r var val 00:11:27.529 05:08:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:27.529 05:08:46 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:11:27.529 05:08:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:27.529 00:11:27.529 real 0m4.726s 00:11:27.529 user 0m4.213s 00:11:27.529 sys 0m0.329s 00:11:27.529 05:08:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:27.529 ************************************ 00:11:27.529 END TEST accel_dif_verify 00:11:27.529 ************************************ 00:11:27.529 
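The dif_verify test that just finished (and the dif_generate test that starts next) exercise T10 protection information: each 512-byte data block carries 8 bytes of metadata holding a guard CRC, an application tag and a reference tag. dif_generate fills those fields in; dif_verify recomputes the guard and checks the tags. The sketch below is a deliberately simplified model of that flow: the metadata is kept in a separate array, endianness and the various tag-checking policies are ignored, and the struct and function names are invented rather than taken from SPDK.

/* Simplified model of dif_generate / dif_verify for 512-byte blocks + 8-byte metadata. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 512u

struct dif { uint16_t guard; uint16_t app_tag; uint32_t ref_tag; }; /* 8 bytes of metadata per block */

/* CRC-16/T10-DIF (polynomial 0x8BB7, MSB-first), the guard checksum used by DIF. */
static uint16_t crc_t10dif(uint16_t crc, const uint8_t *buf, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)buf[i] << 8;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7) : (uint16_t)(crc << 1);
    }
    return crc;
}

/* dif_generate: compute guard/app/ref tags for every block. */
static void dif_generate(const uint8_t *data, size_t nblocks, struct dif *md, uint32_t start_ref)
{
    for (size_t i = 0; i < nblocks; i++) {
        md[i].guard   = crc_t10dif(0, data + i * BLOCK_SIZE, BLOCK_SIZE);
        md[i].app_tag = 0;
        md[i].ref_tag = start_ref + (uint32_t)i;
    }
}

/* dif_verify: recompute the guard and check the tags; returns the number of failing blocks. */
static size_t dif_verify(const uint8_t *data, size_t nblocks, const struct dif *md, uint32_t start_ref)
{
    size_t errors = 0;
    for (size_t i = 0; i < nblocks; i++) {
        if (md[i].guard   != crc_t10dif(0, data + i * BLOCK_SIZE, BLOCK_SIZE) ||
            md[i].app_tag != 0 ||
            md[i].ref_tag != start_ref + (uint32_t)i)
            errors++;
    }
    return errors;
}

int main(void)
{
    static uint8_t data[8 * BLOCK_SIZE];         /* a 4096-byte transfer = 8 protected blocks */
    static struct dif md[8];

    memset(data, 0x42, sizeof(data));
    dif_generate(data, 8, md, 100);
    printf("clean verify:     %zu errors\n", dif_verify(data, 8, md, 100)); /* 0 */
    data[700] ^= 0x01;                           /* corrupt one byte inside block 1 */
    printf("after corruption: %zu errors\n", dif_verify(data, 8, md, 100)); /* 1 */
    return 0;
}

The guard here is computed with a bitwise CRC-16/T10-DIF for clarity; production code typically uses a table-driven or instruction-accelerated variant. The per-block CRC work is also why the dif workloads above land in the hundreds of MiB/s while the plain copy paths exceed a GiB/s.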
05:08:46 -- common/autotest_common.sh@10 -- # set +x 00:11:27.529 05:08:46 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:11:27.530 05:08:46 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:27.530 05:08:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:27.530 05:08:46 -- common/autotest_common.sh@10 -- # set +x 00:11:27.530 ************************************ 00:11:27.530 START TEST accel_dif_generate 00:11:27.530 ************************************ 00:11:27.530 05:08:46 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:11:27.530 05:08:46 -- accel/accel.sh@16 -- # local accel_opc 00:11:27.530 05:08:46 -- accel/accel.sh@17 -- # local accel_module 00:11:27.530 05:08:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:11:27.530 05:08:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:11:27.530 05:08:46 -- accel/accel.sh@12 -- # build_accel_config 00:11:27.530 05:08:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:27.530 05:08:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:27.530 05:08:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:27.530 05:08:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:27.530 05:08:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:27.530 05:08:46 -- accel/accel.sh@41 -- # local IFS=, 00:11:27.530 05:08:46 -- accel/accel.sh@42 -- # jq -r . 00:11:27.530 [2024-07-26 05:08:46.546544] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:27.530 [2024-07-26 05:08:46.546678] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64375 ] 00:11:27.789 [2024-07-26 05:08:46.700063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.789 [2024-07-26 05:08:46.885537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.323 05:08:48 -- accel/accel.sh@18 -- # out=' 00:11:30.323 SPDK Configuration: 00:11:30.323 Core mask: 0x1 00:11:30.323 00:11:30.323 Accel Perf Configuration: 00:11:30.323 Workload Type: dif_generate 00:11:30.323 Vector size: 4096 bytes 00:11:30.323 Transfer size: 4096 bytes 00:11:30.323 Block size: 512 bytes 00:11:30.323 Metadata size: 8 bytes 00:11:30.323 Vector count 1 00:11:30.323 Module: software 00:11:30.323 Queue depth: 32 00:11:30.323 Allocate depth: 32 00:11:30.323 # threads/core: 1 00:11:30.323 Run time: 1 seconds 00:11:30.323 Verify: No 00:11:30.323 00:11:30.323 Running for 1 seconds... 
00:11:30.323 00:11:30.323 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:30.323 ------------------------------------------------------------------------------------ 00:11:30.323 0,0 116608/s 462 MiB/s 0 0 00:11:30.323 ==================================================================================== 00:11:30.323 Total 116608/s 455 MiB/s 0 0' 00:11:30.323 05:08:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:11:30.323 05:08:48 -- accel/accel.sh@20 -- # IFS=: 00:11:30.323 05:08:48 -- accel/accel.sh@20 -- # read -r var val 00:11:30.323 05:08:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:11:30.323 05:08:48 -- accel/accel.sh@12 -- # build_accel_config 00:11:30.323 05:08:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:30.323 05:08:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:30.323 05:08:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:30.323 05:08:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:30.323 05:08:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:30.323 05:08:48 -- accel/accel.sh@41 -- # local IFS=, 00:11:30.323 05:08:48 -- accel/accel.sh@42 -- # jq -r . 00:11:30.323 [2024-07-26 05:08:48.885651] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:30.323 [2024-07-26 05:08:48.885784] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64407 ] 00:11:30.323 [2024-07-26 05:08:49.042231] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.323 [2024-07-26 05:08:49.215738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.323 05:08:49 -- accel/accel.sh@21 -- # val= 00:11:30.323 05:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.323 05:08:49 -- accel/accel.sh@20 -- # IFS=: 00:11:30.323 05:08:49 -- accel/accel.sh@20 -- # read -r var val 00:11:30.323 05:08:49 -- accel/accel.sh@21 -- # val= 00:11:30.323 05:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.323 05:08:49 -- accel/accel.sh@20 -- # IFS=: 00:11:30.323 05:08:49 -- accel/accel.sh@20 -- # read -r var val 00:11:30.323 05:08:49 -- accel/accel.sh@21 -- # val=0x1 00:11:30.323 05:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.323 05:08:49 -- accel/accel.sh@20 -- # IFS=: 00:11:30.323 05:08:49 -- accel/accel.sh@20 -- # read -r var val 00:11:30.323 05:08:49 -- accel/accel.sh@21 -- # val= 00:11:30.323 05:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.323 05:08:49 -- accel/accel.sh@20 -- # IFS=: 00:11:30.323 05:08:49 -- accel/accel.sh@20 -- # read -r var val 00:11:30.323 05:08:49 -- accel/accel.sh@21 -- # val= 00:11:30.323 05:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.323 05:08:49 -- accel/accel.sh@20 -- # IFS=: 00:11:30.323 05:08:49 -- accel/accel.sh@20 -- # read -r var val 00:11:30.323 05:08:49 -- accel/accel.sh@21 -- # val=dif_generate 00:11:30.323 05:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.323 05:08:49 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:11:30.323 05:08:49 -- accel/accel.sh@20 -- # IFS=: 00:11:30.323 05:08:49 -- accel/accel.sh@20 -- # read -r var val 00:11:30.323 05:08:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:30.323 05:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.323 05:08:49 -- accel/accel.sh@20 -- # IFS=: 00:11:30.323 05:08:49 -- accel/accel.sh@20 -- # read -r var val 
00:11:30.323 05:08:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:30.323 05:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.323 05:08:49 -- accel/accel.sh@20 -- # IFS=: 00:11:30.323 05:08:49 -- accel/accel.sh@20 -- # read -r var val 00:11:30.323 05:08:49 -- accel/accel.sh@21 -- # val='512 bytes' 00:11:30.323 05:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.323 05:08:49 -- accel/accel.sh@20 -- # IFS=: 00:11:30.323 05:08:49 -- accel/accel.sh@20 -- # read -r var val 00:11:30.323 05:08:49 -- accel/accel.sh@21 -- # val='8 bytes' 00:11:30.323 05:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.323 05:08:49 -- accel/accel.sh@20 -- # IFS=: 00:11:30.323 05:08:49 -- accel/accel.sh@20 -- # read -r var val 00:11:30.323 05:08:49 -- accel/accel.sh@21 -- # val= 00:11:30.323 05:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.323 05:08:49 -- accel/accel.sh@20 -- # IFS=: 00:11:30.323 05:08:49 -- accel/accel.sh@20 -- # read -r var val 00:11:30.323 05:08:49 -- accel/accel.sh@21 -- # val=software 00:11:30.323 05:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.323 05:08:49 -- accel/accel.sh@23 -- # accel_module=software 00:11:30.323 05:08:49 -- accel/accel.sh@20 -- # IFS=: 00:11:30.323 05:08:49 -- accel/accel.sh@20 -- # read -r var val 00:11:30.323 05:08:49 -- accel/accel.sh@21 -- # val=32 00:11:30.323 05:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.323 05:08:49 -- accel/accel.sh@20 -- # IFS=: 00:11:30.323 05:08:49 -- accel/accel.sh@20 -- # read -r var val 00:11:30.323 05:08:49 -- accel/accel.sh@21 -- # val=32 00:11:30.324 05:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.324 05:08:49 -- accel/accel.sh@20 -- # IFS=: 00:11:30.324 05:08:49 -- accel/accel.sh@20 -- # read -r var val 00:11:30.324 05:08:49 -- accel/accel.sh@21 -- # val=1 00:11:30.324 05:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.324 05:08:49 -- accel/accel.sh@20 -- # IFS=: 00:11:30.324 05:08:49 -- accel/accel.sh@20 -- # read -r var val 00:11:30.324 05:08:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:30.324 05:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.324 05:08:49 -- accel/accel.sh@20 -- # IFS=: 00:11:30.324 05:08:49 -- accel/accel.sh@20 -- # read -r var val 00:11:30.324 05:08:49 -- accel/accel.sh@21 -- # val=No 00:11:30.324 05:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.324 05:08:49 -- accel/accel.sh@20 -- # IFS=: 00:11:30.324 05:08:49 -- accel/accel.sh@20 -- # read -r var val 00:11:30.324 05:08:49 -- accel/accel.sh@21 -- # val= 00:11:30.324 05:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.324 05:08:49 -- accel/accel.sh@20 -- # IFS=: 00:11:30.324 05:08:49 -- accel/accel.sh@20 -- # read -r var val 00:11:30.324 05:08:49 -- accel/accel.sh@21 -- # val= 00:11:30.324 05:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.324 05:08:49 -- accel/accel.sh@20 -- # IFS=: 00:11:30.324 05:08:49 -- accel/accel.sh@20 -- # read -r var val 00:11:32.228 05:08:51 -- accel/accel.sh@21 -- # val= 00:11:32.228 05:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.228 05:08:51 -- accel/accel.sh@20 -- # IFS=: 00:11:32.228 05:08:51 -- accel/accel.sh@20 -- # read -r var val 00:11:32.228 05:08:51 -- accel/accel.sh@21 -- # val= 00:11:32.228 05:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.228 05:08:51 -- accel/accel.sh@20 -- # IFS=: 00:11:32.228 05:08:51 -- accel/accel.sh@20 -- # read -r var val 00:11:32.228 05:08:51 -- accel/accel.sh@21 -- # val= 00:11:32.228 05:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.228 05:08:51 -- 
accel/accel.sh@20 -- # IFS=: 00:11:32.228 05:08:51 -- accel/accel.sh@20 -- # read -r var val 00:11:32.228 05:08:51 -- accel/accel.sh@21 -- # val= 00:11:32.228 05:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.228 05:08:51 -- accel/accel.sh@20 -- # IFS=: 00:11:32.228 05:08:51 -- accel/accel.sh@20 -- # read -r var val 00:11:32.228 05:08:51 -- accel/accel.sh@21 -- # val= 00:11:32.228 05:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.228 05:08:51 -- accel/accel.sh@20 -- # IFS=: 00:11:32.228 05:08:51 -- accel/accel.sh@20 -- # read -r var val 00:11:32.228 05:08:51 -- accel/accel.sh@21 -- # val= 00:11:32.228 05:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.228 05:08:51 -- accel/accel.sh@20 -- # IFS=: 00:11:32.228 05:08:51 -- accel/accel.sh@20 -- # read -r var val 00:11:32.228 05:08:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:32.228 05:08:51 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:11:32.228 05:08:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:32.228 00:11:32.228 real 0m4.676s 00:11:32.228 user 0m4.173s 00:11:32.228 sys 0m0.321s 00:11:32.228 05:08:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:32.228 05:08:51 -- common/autotest_common.sh@10 -- # set +x 00:11:32.228 ************************************ 00:11:32.228 END TEST accel_dif_generate 00:11:32.228 ************************************ 00:11:32.228 05:08:51 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:11:32.228 05:08:51 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:32.228 05:08:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:32.228 05:08:51 -- common/autotest_common.sh@10 -- # set +x 00:11:32.228 ************************************ 00:11:32.228 START TEST accel_dif_generate_copy 00:11:32.228 ************************************ 00:11:32.228 05:08:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:11:32.228 05:08:51 -- accel/accel.sh@16 -- # local accel_opc 00:11:32.228 05:08:51 -- accel/accel.sh@17 -- # local accel_module 00:11:32.228 05:08:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:11:32.228 05:08:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:11:32.228 05:08:51 -- accel/accel.sh@12 -- # build_accel_config 00:11:32.228 05:08:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:32.228 05:08:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:32.228 05:08:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:32.228 05:08:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:32.228 05:08:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:32.228 05:08:51 -- accel/accel.sh@41 -- # local IFS=, 00:11:32.228 05:08:51 -- accel/accel.sh@42 -- # jq -r . 00:11:32.228 [2024-07-26 05:08:51.275768] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:11:32.228 [2024-07-26 05:08:51.275914] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64453 ] 00:11:32.487 [2024-07-26 05:08:51.453178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.746 [2024-07-26 05:08:51.625900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.650 05:08:53 -- accel/accel.sh@18 -- # out=' 00:11:34.650 SPDK Configuration: 00:11:34.650 Core mask: 0x1 00:11:34.650 00:11:34.650 Accel Perf Configuration: 00:11:34.650 Workload Type: dif_generate_copy 00:11:34.650 Vector size: 4096 bytes 00:11:34.650 Transfer size: 4096 bytes 00:11:34.650 Vector count 1 00:11:34.650 Module: software 00:11:34.650 Queue depth: 32 00:11:34.650 Allocate depth: 32 00:11:34.650 # threads/core: 1 00:11:34.650 Run time: 1 seconds 00:11:34.650 Verify: No 00:11:34.650 00:11:34.650 Running for 1 seconds... 00:11:34.650 00:11:34.650 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:34.650 ------------------------------------------------------------------------------------ 00:11:34.650 0,0 84736/s 336 MiB/s 0 0 00:11:34.650 ==================================================================================== 00:11:34.650 Total 84736/s 331 MiB/s 0 0' 00:11:34.650 05:08:53 -- accel/accel.sh@20 -- # IFS=: 00:11:34.650 05:08:53 -- accel/accel.sh@20 -- # read -r var val 00:11:34.650 05:08:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:11:34.651 05:08:53 -- accel/accel.sh@12 -- # build_accel_config 00:11:34.651 05:08:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:11:34.651 05:08:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:34.651 05:08:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:34.651 05:08:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:34.651 05:08:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:34.651 05:08:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:34.651 05:08:53 -- accel/accel.sh@41 -- # local IFS=, 00:11:34.651 05:08:53 -- accel/accel.sh@42 -- # jq -r . 00:11:34.651 [2024-07-26 05:08:53.632576] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:11:34.651 [2024-07-26 05:08:53.632918] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64479 ] 00:11:34.909 [2024-07-26 05:08:53.803961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.909 [2024-07-26 05:08:53.979476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.167 05:08:54 -- accel/accel.sh@21 -- # val= 00:11:35.167 05:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.167 05:08:54 -- accel/accel.sh@20 -- # IFS=: 00:11:35.167 05:08:54 -- accel/accel.sh@20 -- # read -r var val 00:11:35.167 05:08:54 -- accel/accel.sh@21 -- # val= 00:11:35.167 05:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.167 05:08:54 -- accel/accel.sh@20 -- # IFS=: 00:11:35.167 05:08:54 -- accel/accel.sh@20 -- # read -r var val 00:11:35.167 05:08:54 -- accel/accel.sh@21 -- # val=0x1 00:11:35.167 05:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.167 05:08:54 -- accel/accel.sh@20 -- # IFS=: 00:11:35.167 05:08:54 -- accel/accel.sh@20 -- # read -r var val 00:11:35.167 05:08:54 -- accel/accel.sh@21 -- # val= 00:11:35.167 05:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.167 05:08:54 -- accel/accel.sh@20 -- # IFS=: 00:11:35.167 05:08:54 -- accel/accel.sh@20 -- # read -r var val 00:11:35.167 05:08:54 -- accel/accel.sh@21 -- # val= 00:11:35.168 05:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.168 05:08:54 -- accel/accel.sh@20 -- # IFS=: 00:11:35.168 05:08:54 -- accel/accel.sh@20 -- # read -r var val 00:11:35.168 05:08:54 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:11:35.168 05:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.168 05:08:54 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:11:35.168 05:08:54 -- accel/accel.sh@20 -- # IFS=: 00:11:35.168 05:08:54 -- accel/accel.sh@20 -- # read -r var val 00:11:35.168 05:08:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:35.168 05:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.168 05:08:54 -- accel/accel.sh@20 -- # IFS=: 00:11:35.168 05:08:54 -- accel/accel.sh@20 -- # read -r var val 00:11:35.168 05:08:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:35.168 05:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.168 05:08:54 -- accel/accel.sh@20 -- # IFS=: 00:11:35.168 05:08:54 -- accel/accel.sh@20 -- # read -r var val 00:11:35.168 05:08:54 -- accel/accel.sh@21 -- # val= 00:11:35.168 05:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.168 05:08:54 -- accel/accel.sh@20 -- # IFS=: 00:11:35.168 05:08:54 -- accel/accel.sh@20 -- # read -r var val 00:11:35.168 05:08:54 -- accel/accel.sh@21 -- # val=software 00:11:35.168 05:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.168 05:08:54 -- accel/accel.sh@23 -- # accel_module=software 00:11:35.168 05:08:54 -- accel/accel.sh@20 -- # IFS=: 00:11:35.168 05:08:54 -- accel/accel.sh@20 -- # read -r var val 00:11:35.168 05:08:54 -- accel/accel.sh@21 -- # val=32 00:11:35.168 05:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.168 05:08:54 -- accel/accel.sh@20 -- # IFS=: 00:11:35.168 05:08:54 -- accel/accel.sh@20 -- # read -r var val 00:11:35.168 05:08:54 -- accel/accel.sh@21 -- # val=32 00:11:35.168 05:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.168 05:08:54 -- accel/accel.sh@20 -- # IFS=: 00:11:35.168 05:08:54 -- accel/accel.sh@20 -- # read -r var val 00:11:35.168 05:08:54 -- accel/accel.sh@21 
-- # val=1 00:11:35.168 05:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.168 05:08:54 -- accel/accel.sh@20 -- # IFS=: 00:11:35.168 05:08:54 -- accel/accel.sh@20 -- # read -r var val 00:11:35.168 05:08:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:35.168 05:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.168 05:08:54 -- accel/accel.sh@20 -- # IFS=: 00:11:35.168 05:08:54 -- accel/accel.sh@20 -- # read -r var val 00:11:35.168 05:08:54 -- accel/accel.sh@21 -- # val=No 00:11:35.168 05:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.168 05:08:54 -- accel/accel.sh@20 -- # IFS=: 00:11:35.168 05:08:54 -- accel/accel.sh@20 -- # read -r var val 00:11:35.168 05:08:54 -- accel/accel.sh@21 -- # val= 00:11:35.168 05:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.168 05:08:54 -- accel/accel.sh@20 -- # IFS=: 00:11:35.168 05:08:54 -- accel/accel.sh@20 -- # read -r var val 00:11:35.168 05:08:54 -- accel/accel.sh@21 -- # val= 00:11:35.168 05:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.168 05:08:54 -- accel/accel.sh@20 -- # IFS=: 00:11:35.168 05:08:54 -- accel/accel.sh@20 -- # read -r var val 00:11:37.070 05:08:55 -- accel/accel.sh@21 -- # val= 00:11:37.070 05:08:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.070 05:08:55 -- accel/accel.sh@20 -- # IFS=: 00:11:37.070 05:08:55 -- accel/accel.sh@20 -- # read -r var val 00:11:37.070 05:08:55 -- accel/accel.sh@21 -- # val= 00:11:37.070 05:08:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.070 05:08:55 -- accel/accel.sh@20 -- # IFS=: 00:11:37.070 05:08:55 -- accel/accel.sh@20 -- # read -r var val 00:11:37.070 05:08:55 -- accel/accel.sh@21 -- # val= 00:11:37.070 05:08:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.070 05:08:55 -- accel/accel.sh@20 -- # IFS=: 00:11:37.070 05:08:55 -- accel/accel.sh@20 -- # read -r var val 00:11:37.070 05:08:55 -- accel/accel.sh@21 -- # val= 00:11:37.070 05:08:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.070 05:08:55 -- accel/accel.sh@20 -- # IFS=: 00:11:37.070 05:08:55 -- accel/accel.sh@20 -- # read -r var val 00:11:37.070 05:08:55 -- accel/accel.sh@21 -- # val= 00:11:37.070 05:08:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.070 05:08:55 -- accel/accel.sh@20 -- # IFS=: 00:11:37.070 05:08:55 -- accel/accel.sh@20 -- # read -r var val 00:11:37.070 05:08:55 -- accel/accel.sh@21 -- # val= 00:11:37.070 05:08:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.070 05:08:55 -- accel/accel.sh@20 -- # IFS=: 00:11:37.070 05:08:55 -- accel/accel.sh@20 -- # read -r var val 00:11:37.070 05:08:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:37.070 05:08:55 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:11:37.070 05:08:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:37.070 00:11:37.070 real 0m4.709s 00:11:37.070 user 0m4.209s 00:11:37.070 sys 0m0.316s 00:11:37.070 05:08:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:37.070 ************************************ 00:11:37.070 END TEST accel_dif_generate_copy 00:11:37.070 ************************************ 00:11:37.070 05:08:55 -- common/autotest_common.sh@10 -- # set +x 00:11:37.070 05:08:55 -- accel/accel.sh@107 -- # [[ y == y ]] 00:11:37.070 05:08:55 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:37.070 05:08:55 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:11:37.070 05:08:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:37.070 05:08:55 -- 
common/autotest_common.sh@10 -- # set +x 00:11:37.070 ************************************ 00:11:37.070 START TEST accel_comp 00:11:37.070 ************************************ 00:11:37.070 05:08:55 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:37.070 05:08:55 -- accel/accel.sh@16 -- # local accel_opc 00:11:37.070 05:08:55 -- accel/accel.sh@17 -- # local accel_module 00:11:37.070 05:08:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:37.070 05:08:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:37.070 05:08:55 -- accel/accel.sh@12 -- # build_accel_config 00:11:37.070 05:08:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:37.070 05:08:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:37.070 05:08:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:37.070 05:08:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:37.070 05:08:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:37.070 05:08:55 -- accel/accel.sh@41 -- # local IFS=, 00:11:37.070 05:08:55 -- accel/accel.sh@42 -- # jq -r . 00:11:37.070 [2024-07-26 05:08:56.029460] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:37.070 [2024-07-26 05:08:56.029580] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64526 ] 00:11:37.329 [2024-07-26 05:08:56.182813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.329 [2024-07-26 05:08:56.351601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.229 05:08:58 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:39.229 00:11:39.229 SPDK Configuration: 00:11:39.229 Core mask: 0x1 00:11:39.229 00:11:39.229 Accel Perf Configuration: 00:11:39.229 Workload Type: compress 00:11:39.229 Transfer size: 4096 bytes 00:11:39.229 Vector count 1 00:11:39.229 Module: software 00:11:39.229 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:39.229 Queue depth: 32 00:11:39.229 Allocate depth: 32 00:11:39.229 # threads/core: 1 00:11:39.229 Run time: 1 seconds 00:11:39.229 Verify: No 00:11:39.229 00:11:39.229 Running for 1 seconds... 
00:11:39.229 00:11:39.229 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:39.229 ------------------------------------------------------------------------------------ 00:11:39.229 0,0 48480/s 202 MiB/s 0 0 00:11:39.229 ==================================================================================== 00:11:39.229 Total 48480/s 189 MiB/s 0 0' 00:11:39.229 05:08:58 -- accel/accel.sh@20 -- # IFS=: 00:11:39.229 05:08:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:39.229 05:08:58 -- accel/accel.sh@20 -- # read -r var val 00:11:39.229 05:08:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:39.229 05:08:58 -- accel/accel.sh@12 -- # build_accel_config 00:11:39.229 05:08:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:39.229 05:08:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:39.229 05:08:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:39.229 05:08:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:39.229 05:08:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:39.229 05:08:58 -- accel/accel.sh@41 -- # local IFS=, 00:11:39.229 05:08:58 -- accel/accel.sh@42 -- # jq -r . 00:11:39.488 [2024-07-26 05:08:58.363886] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:39.488 [2024-07-26 05:08:58.364054] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64552 ] 00:11:39.488 [2024-07-26 05:08:58.533400] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.747 [2024-07-26 05:08:58.703687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.006 05:08:58 -- accel/accel.sh@21 -- # val= 00:11:40.006 05:08:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # IFS=: 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # read -r var val 00:11:40.006 05:08:58 -- accel/accel.sh@21 -- # val= 00:11:40.006 05:08:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # IFS=: 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # read -r var val 00:11:40.006 05:08:58 -- accel/accel.sh@21 -- # val= 00:11:40.006 05:08:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # IFS=: 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # read -r var val 00:11:40.006 05:08:58 -- accel/accel.sh@21 -- # val=0x1 00:11:40.006 05:08:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # IFS=: 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # read -r var val 00:11:40.006 05:08:58 -- accel/accel.sh@21 -- # val= 00:11:40.006 05:08:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # IFS=: 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # read -r var val 00:11:40.006 05:08:58 -- accel/accel.sh@21 -- # val= 00:11:40.006 05:08:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # IFS=: 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # read -r var val 00:11:40.006 05:08:58 -- accel/accel.sh@21 -- # val=compress 00:11:40.006 05:08:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.006 05:08:58 -- accel/accel.sh@24 -- # accel_opc=compress 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # IFS=: 
00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # read -r var val 00:11:40.006 05:08:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:40.006 05:08:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # IFS=: 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # read -r var val 00:11:40.006 05:08:58 -- accel/accel.sh@21 -- # val= 00:11:40.006 05:08:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # IFS=: 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # read -r var val 00:11:40.006 05:08:58 -- accel/accel.sh@21 -- # val=software 00:11:40.006 05:08:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.006 05:08:58 -- accel/accel.sh@23 -- # accel_module=software 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # IFS=: 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # read -r var val 00:11:40.006 05:08:58 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:40.006 05:08:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # IFS=: 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # read -r var val 00:11:40.006 05:08:58 -- accel/accel.sh@21 -- # val=32 00:11:40.006 05:08:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # IFS=: 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # read -r var val 00:11:40.006 05:08:58 -- accel/accel.sh@21 -- # val=32 00:11:40.006 05:08:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # IFS=: 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # read -r var val 00:11:40.006 05:08:58 -- accel/accel.sh@21 -- # val=1 00:11:40.006 05:08:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # IFS=: 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # read -r var val 00:11:40.006 05:08:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:40.006 05:08:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # IFS=: 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # read -r var val 00:11:40.006 05:08:58 -- accel/accel.sh@21 -- # val=No 00:11:40.006 05:08:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # IFS=: 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # read -r var val 00:11:40.006 05:08:58 -- accel/accel.sh@21 -- # val= 00:11:40.006 05:08:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # IFS=: 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # read -r var val 00:11:40.006 05:08:58 -- accel/accel.sh@21 -- # val= 00:11:40.006 05:08:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # IFS=: 00:11:40.006 05:08:58 -- accel/accel.sh@20 -- # read -r var val 00:11:41.909 05:09:00 -- accel/accel.sh@21 -- # val= 00:11:41.909 05:09:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.909 05:09:00 -- accel/accel.sh@20 -- # IFS=: 00:11:41.909 05:09:00 -- accel/accel.sh@20 -- # read -r var val 00:11:41.909 05:09:00 -- accel/accel.sh@21 -- # val= 00:11:41.909 05:09:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.909 05:09:00 -- accel/accel.sh@20 -- # IFS=: 00:11:41.909 05:09:00 -- accel/accel.sh@20 -- # read -r var val 00:11:41.909 05:09:00 -- accel/accel.sh@21 -- # val= 00:11:41.909 05:09:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.909 05:09:00 -- accel/accel.sh@20 -- # IFS=: 00:11:41.909 05:09:00 -- accel/accel.sh@20 -- # read -r var val 00:11:41.909 05:09:00 -- accel/accel.sh@21 -- # val= 
00:11:41.909 05:09:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.909 05:09:00 -- accel/accel.sh@20 -- # IFS=: 00:11:41.909 05:09:00 -- accel/accel.sh@20 -- # read -r var val 00:11:41.909 05:09:00 -- accel/accel.sh@21 -- # val= 00:11:41.909 05:09:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.909 05:09:00 -- accel/accel.sh@20 -- # IFS=: 00:11:41.909 05:09:00 -- accel/accel.sh@20 -- # read -r var val 00:11:41.909 05:09:00 -- accel/accel.sh@21 -- # val= 00:11:41.909 05:09:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.909 05:09:00 -- accel/accel.sh@20 -- # IFS=: 00:11:41.909 05:09:00 -- accel/accel.sh@20 -- # read -r var val 00:11:41.909 ************************************ 00:11:41.909 END TEST accel_comp 00:11:41.909 ************************************ 00:11:41.909 05:09:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:41.909 05:09:00 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:11:41.909 05:09:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:41.909 00:11:41.909 real 0m4.717s 00:11:41.909 user 0m4.231s 00:11:41.909 sys 0m0.304s 00:11:41.909 05:09:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:41.909 05:09:00 -- common/autotest_common.sh@10 -- # set +x 00:11:41.909 05:09:00 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:41.909 05:09:00 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:11:41.909 05:09:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:41.909 05:09:00 -- common/autotest_common.sh@10 -- # set +x 00:11:41.909 ************************************ 00:11:41.909 START TEST accel_decomp 00:11:41.909 ************************************ 00:11:41.909 05:09:00 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:41.909 05:09:00 -- accel/accel.sh@16 -- # local accel_opc 00:11:41.909 05:09:00 -- accel/accel.sh@17 -- # local accel_module 00:11:41.909 05:09:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:41.909 05:09:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:41.909 05:09:00 -- accel/accel.sh@12 -- # build_accel_config 00:11:41.909 05:09:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:41.909 05:09:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:41.909 05:09:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:41.909 05:09:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:41.909 05:09:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:41.909 05:09:00 -- accel/accel.sh@41 -- # local IFS=, 00:11:41.909 05:09:00 -- accel/accel.sh@42 -- # jq -r . 00:11:41.909 [2024-07-26 05:09:00.797912] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:41.909 [2024-07-26 05:09:00.798270] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64593 ] 00:11:41.909 [2024-07-26 05:09:00.969472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.170 [2024-07-26 05:09:01.169018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.701 05:09:03 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:44.701 00:11:44.701 SPDK Configuration: 00:11:44.701 Core mask: 0x1 00:11:44.701 00:11:44.701 Accel Perf Configuration: 00:11:44.701 Workload Type: decompress 00:11:44.701 Transfer size: 4096 bytes 00:11:44.701 Vector count 1 00:11:44.701 Module: software 00:11:44.701 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:44.701 Queue depth: 32 00:11:44.701 Allocate depth: 32 00:11:44.701 # threads/core: 1 00:11:44.701 Run time: 1 seconds 00:11:44.701 Verify: Yes 00:11:44.701 00:11:44.701 Running for 1 seconds... 00:11:44.701 00:11:44.701 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:44.701 ------------------------------------------------------------------------------------ 00:11:44.701 0,0 54496/s 100 MiB/s 0 0 00:11:44.701 ==================================================================================== 00:11:44.701 Total 54496/s 212 MiB/s 0 0' 00:11:44.701 05:09:03 -- accel/accel.sh@20 -- # IFS=: 00:11:44.701 05:09:03 -- accel/accel.sh@20 -- # read -r var val 00:11:44.701 05:09:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:44.701 05:09:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:44.701 05:09:03 -- accel/accel.sh@12 -- # build_accel_config 00:11:44.701 05:09:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:44.701 05:09:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:44.701 05:09:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:44.701 05:09:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:44.701 05:09:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:44.701 05:09:03 -- accel/accel.sh@41 -- # local IFS=, 00:11:44.701 05:09:03 -- accel/accel.sh@42 -- # jq -r . 00:11:44.701 [2024-07-26 05:09:03.326856] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:11:44.701 [2024-07-26 05:09:03.327218] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64629 ] 00:11:44.701 [2024-07-26 05:09:03.501722] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.701 [2024-07-26 05:09:03.701188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.980 05:09:03 -- accel/accel.sh@21 -- # val= 00:11:44.980 05:09:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # IFS=: 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # read -r var val 00:11:44.980 05:09:03 -- accel/accel.sh@21 -- # val= 00:11:44.980 05:09:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # IFS=: 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # read -r var val 00:11:44.980 05:09:03 -- accel/accel.sh@21 -- # val= 00:11:44.980 05:09:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # IFS=: 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # read -r var val 00:11:44.980 05:09:03 -- accel/accel.sh@21 -- # val=0x1 00:11:44.980 05:09:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # IFS=: 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # read -r var val 00:11:44.980 05:09:03 -- accel/accel.sh@21 -- # val= 00:11:44.980 05:09:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # IFS=: 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # read -r var val 00:11:44.980 05:09:03 -- accel/accel.sh@21 -- # val= 00:11:44.980 05:09:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # IFS=: 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # read -r var val 00:11:44.980 05:09:03 -- accel/accel.sh@21 -- # val=decompress 00:11:44.980 05:09:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.980 05:09:03 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # IFS=: 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # read -r var val 00:11:44.980 05:09:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:44.980 05:09:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # IFS=: 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # read -r var val 00:11:44.980 05:09:03 -- accel/accel.sh@21 -- # val= 00:11:44.980 05:09:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # IFS=: 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # read -r var val 00:11:44.980 05:09:03 -- accel/accel.sh@21 -- # val=software 00:11:44.980 05:09:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.980 05:09:03 -- accel/accel.sh@23 -- # accel_module=software 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # IFS=: 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # read -r var val 00:11:44.980 05:09:03 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:44.980 05:09:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # IFS=: 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # read -r var val 00:11:44.980 05:09:03 -- accel/accel.sh@21 -- # val=32 00:11:44.980 05:09:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # IFS=: 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # read -r var val 00:11:44.980 05:09:03 -- 
accel/accel.sh@21 -- # val=32 00:11:44.980 05:09:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # IFS=: 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # read -r var val 00:11:44.980 05:09:03 -- accel/accel.sh@21 -- # val=1 00:11:44.980 05:09:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # IFS=: 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # read -r var val 00:11:44.980 05:09:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:44.980 05:09:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # IFS=: 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # read -r var val 00:11:44.980 05:09:03 -- accel/accel.sh@21 -- # val=Yes 00:11:44.980 05:09:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # IFS=: 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # read -r var val 00:11:44.980 05:09:03 -- accel/accel.sh@21 -- # val= 00:11:44.980 05:09:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # IFS=: 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # read -r var val 00:11:44.980 05:09:03 -- accel/accel.sh@21 -- # val= 00:11:44.980 05:09:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # IFS=: 00:11:44.980 05:09:03 -- accel/accel.sh@20 -- # read -r var val 00:11:46.886 05:09:05 -- accel/accel.sh@21 -- # val= 00:11:46.886 05:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.886 05:09:05 -- accel/accel.sh@20 -- # IFS=: 00:11:46.886 05:09:05 -- accel/accel.sh@20 -- # read -r var val 00:11:46.886 05:09:05 -- accel/accel.sh@21 -- # val= 00:11:46.886 05:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.886 05:09:05 -- accel/accel.sh@20 -- # IFS=: 00:11:46.886 05:09:05 -- accel/accel.sh@20 -- # read -r var val 00:11:46.886 05:09:05 -- accel/accel.sh@21 -- # val= 00:11:46.886 05:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.886 05:09:05 -- accel/accel.sh@20 -- # IFS=: 00:11:46.886 05:09:05 -- accel/accel.sh@20 -- # read -r var val 00:11:46.886 05:09:05 -- accel/accel.sh@21 -- # val= 00:11:46.886 05:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.886 05:09:05 -- accel/accel.sh@20 -- # IFS=: 00:11:46.886 05:09:05 -- accel/accel.sh@20 -- # read -r var val 00:11:46.886 05:09:05 -- accel/accel.sh@21 -- # val= 00:11:46.886 05:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.886 05:09:05 -- accel/accel.sh@20 -- # IFS=: 00:11:46.886 05:09:05 -- accel/accel.sh@20 -- # read -r var val 00:11:46.886 05:09:05 -- accel/accel.sh@21 -- # val= 00:11:46.886 05:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.886 05:09:05 -- accel/accel.sh@20 -- # IFS=: 00:11:46.886 05:09:05 -- accel/accel.sh@20 -- # read -r var val 00:11:46.886 05:09:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:46.886 05:09:05 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:46.886 ************************************ 00:11:46.886 END TEST accel_decomp 00:11:46.886 ************************************ 00:11:46.886 05:09:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:46.886 00:11:46.886 real 0m5.073s 00:11:46.886 user 0m4.534s 00:11:46.886 sys 0m0.353s 00:11:46.886 05:09:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:46.886 05:09:05 -- common/autotest_common.sh@10 -- # set +x 00:11:46.886 05:09:05 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:11:46.886 05:09:05 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:46.886 05:09:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:46.886 05:09:05 -- common/autotest_common.sh@10 -- # set +x 00:11:46.886 ************************************ 00:11:46.886 START TEST accel_decmop_full 00:11:46.886 ************************************ 00:11:46.886 05:09:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:46.886 05:09:05 -- accel/accel.sh@16 -- # local accel_opc 00:11:46.886 05:09:05 -- accel/accel.sh@17 -- # local accel_module 00:11:46.886 05:09:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:46.886 05:09:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:46.886 05:09:05 -- accel/accel.sh@12 -- # build_accel_config 00:11:46.886 05:09:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:46.886 05:09:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:46.886 05:09:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:46.886 05:09:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:46.887 05:09:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:46.887 05:09:05 -- accel/accel.sh@41 -- # local IFS=, 00:11:46.887 05:09:05 -- accel/accel.sh@42 -- # jq -r . 00:11:46.887 [2024-07-26 05:09:05.924610] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:46.887 [2024-07-26 05:09:05.924779] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64671 ] 00:11:47.146 [2024-07-26 05:09:06.100947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.405 [2024-07-26 05:09:06.325852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.941 05:09:08 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:49.941 00:11:49.941 SPDK Configuration: 00:11:49.941 Core mask: 0x1 00:11:49.941 00:11:49.941 Accel Perf Configuration: 00:11:49.941 Workload Type: decompress 00:11:49.941 Transfer size: 111250 bytes 00:11:49.941 Vector count 1 00:11:49.941 Module: software 00:11:49.941 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:49.941 Queue depth: 32 00:11:49.941 Allocate depth: 32 00:11:49.941 # threads/core: 1 00:11:49.941 Run time: 1 seconds 00:11:49.941 Verify: Yes 00:11:49.941 00:11:49.941 Running for 1 seconds... 
00:11:49.941 00:11:49.941 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:49.941 ------------------------------------------------------------------------------------ 00:11:49.941 0,0 3904/s 161 MiB/s 0 0 00:11:49.941 ==================================================================================== 00:11:49.941 Total 3904/s 414 MiB/s 0 0' 00:11:49.941 05:09:08 -- accel/accel.sh@20 -- # IFS=: 00:11:49.941 05:09:08 -- accel/accel.sh@20 -- # read -r var val 00:11:49.941 05:09:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:49.941 05:09:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:49.941 05:09:08 -- accel/accel.sh@12 -- # build_accel_config 00:11:49.941 05:09:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:49.941 05:09:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:49.941 05:09:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:49.941 05:09:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:49.941 05:09:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:49.941 05:09:08 -- accel/accel.sh@41 -- # local IFS=, 00:11:49.941 05:09:08 -- accel/accel.sh@42 -- # jq -r . 00:11:49.941 [2024-07-26 05:09:08.506494] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:49.941 [2024-07-26 05:09:08.506682] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64703 ] 00:11:49.941 [2024-07-26 05:09:08.681759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.941 [2024-07-26 05:09:08.882561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.200 05:09:09 -- accel/accel.sh@21 -- # val= 00:11:50.200 05:09:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # IFS=: 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # read -r var val 00:11:50.200 05:09:09 -- accel/accel.sh@21 -- # val= 00:11:50.200 05:09:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # IFS=: 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # read -r var val 00:11:50.200 05:09:09 -- accel/accel.sh@21 -- # val= 00:11:50.200 05:09:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # IFS=: 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # read -r var val 00:11:50.200 05:09:09 -- accel/accel.sh@21 -- # val=0x1 00:11:50.200 05:09:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # IFS=: 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # read -r var val 00:11:50.200 05:09:09 -- accel/accel.sh@21 -- # val= 00:11:50.200 05:09:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # IFS=: 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # read -r var val 00:11:50.200 05:09:09 -- accel/accel.sh@21 -- # val= 00:11:50.200 05:09:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # IFS=: 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # read -r var val 00:11:50.200 05:09:09 -- accel/accel.sh@21 -- # val=decompress 00:11:50.200 05:09:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.200 05:09:09 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:50.200 05:09:09 -- accel/accel.sh@20 
-- # IFS=: 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # read -r var val 00:11:50.200 05:09:09 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:50.200 05:09:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # IFS=: 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # read -r var val 00:11:50.200 05:09:09 -- accel/accel.sh@21 -- # val= 00:11:50.200 05:09:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # IFS=: 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # read -r var val 00:11:50.200 05:09:09 -- accel/accel.sh@21 -- # val=software 00:11:50.200 05:09:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.200 05:09:09 -- accel/accel.sh@23 -- # accel_module=software 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # IFS=: 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # read -r var val 00:11:50.200 05:09:09 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:50.200 05:09:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # IFS=: 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # read -r var val 00:11:50.200 05:09:09 -- accel/accel.sh@21 -- # val=32 00:11:50.200 05:09:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # IFS=: 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # read -r var val 00:11:50.200 05:09:09 -- accel/accel.sh@21 -- # val=32 00:11:50.200 05:09:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # IFS=: 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # read -r var val 00:11:50.200 05:09:09 -- accel/accel.sh@21 -- # val=1 00:11:50.200 05:09:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # IFS=: 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # read -r var val 00:11:50.200 05:09:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:50.200 05:09:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # IFS=: 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # read -r var val 00:11:50.200 05:09:09 -- accel/accel.sh@21 -- # val=Yes 00:11:50.200 05:09:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # IFS=: 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # read -r var val 00:11:50.200 05:09:09 -- accel/accel.sh@21 -- # val= 00:11:50.200 05:09:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # IFS=: 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # read -r var val 00:11:50.200 05:09:09 -- accel/accel.sh@21 -- # val= 00:11:50.200 05:09:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # IFS=: 00:11:50.200 05:09:09 -- accel/accel.sh@20 -- # read -r var val 00:11:52.106 05:09:11 -- accel/accel.sh@21 -- # val= 00:11:52.106 05:09:11 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.106 05:09:11 -- accel/accel.sh@20 -- # IFS=: 00:11:52.106 05:09:11 -- accel/accel.sh@20 -- # read -r var val 00:11:52.106 05:09:11 -- accel/accel.sh@21 -- # val= 00:11:52.106 05:09:11 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.106 05:09:11 -- accel/accel.sh@20 -- # IFS=: 00:11:52.106 05:09:11 -- accel/accel.sh@20 -- # read -r var val 00:11:52.106 05:09:11 -- accel/accel.sh@21 -- # val= 00:11:52.106 05:09:11 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.106 05:09:11 -- accel/accel.sh@20 -- # IFS=: 00:11:52.106 05:09:11 -- accel/accel.sh@20 -- # read -r var val 00:11:52.106 05:09:11 -- accel/accel.sh@21 -- # 
val= 00:11:52.106 05:09:11 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.106 05:09:11 -- accel/accel.sh@20 -- # IFS=: 00:11:52.106 05:09:11 -- accel/accel.sh@20 -- # read -r var val 00:11:52.106 05:09:11 -- accel/accel.sh@21 -- # val= 00:11:52.106 05:09:11 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.106 05:09:11 -- accel/accel.sh@20 -- # IFS=: 00:11:52.106 05:09:11 -- accel/accel.sh@20 -- # read -r var val 00:11:52.106 05:09:11 -- accel/accel.sh@21 -- # val= 00:11:52.106 05:09:11 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.106 05:09:11 -- accel/accel.sh@20 -- # IFS=: 00:11:52.106 05:09:11 -- accel/accel.sh@20 -- # read -r var val 00:11:52.106 05:09:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:52.106 05:09:11 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:52.106 05:09:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:52.106 00:11:52.106 real 0m5.148s 00:11:52.106 user 0m4.601s 00:11:52.106 sys 0m0.363s 00:11:52.106 ************************************ 00:11:52.106 END TEST accel_decmop_full 00:11:52.106 ************************************ 00:11:52.106 05:09:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:52.106 05:09:11 -- common/autotest_common.sh@10 -- # set +x 00:11:52.106 05:09:11 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:52.106 05:09:11 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:52.106 05:09:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:52.106 05:09:11 -- common/autotest_common.sh@10 -- # set +x 00:11:52.106 ************************************ 00:11:52.106 START TEST accel_decomp_mcore 00:11:52.106 ************************************ 00:11:52.106 05:09:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:52.106 05:09:11 -- accel/accel.sh@16 -- # local accel_opc 00:11:52.106 05:09:11 -- accel/accel.sh@17 -- # local accel_module 00:11:52.106 05:09:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:52.106 05:09:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:52.106 05:09:11 -- accel/accel.sh@12 -- # build_accel_config 00:11:52.106 05:09:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:52.106 05:09:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:52.106 05:09:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:52.106 05:09:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:52.106 05:09:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:52.106 05:09:11 -- accel/accel.sh@41 -- # local IFS=, 00:11:52.106 05:09:11 -- accel/accel.sh@42 -- # jq -r . 00:11:52.106 [2024-07-26 05:09:11.126703] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:11:52.106 [2024-07-26 05:09:11.126866] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64748 ] 00:11:52.368 [2024-07-26 05:09:11.302854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:52.628 [2024-07-26 05:09:11.509118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.628 [2024-07-26 05:09:11.509290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.628 [2024-07-26 05:09:11.509422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:52.628 [2024-07-26 05:09:11.509641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.162 05:09:13 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:55.162 00:11:55.162 SPDK Configuration: 00:11:55.162 Core mask: 0xf 00:11:55.162 00:11:55.162 Accel Perf Configuration: 00:11:55.162 Workload Type: decompress 00:11:55.162 Transfer size: 4096 bytes 00:11:55.162 Vector count 1 00:11:55.162 Module: software 00:11:55.162 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:55.162 Queue depth: 32 00:11:55.162 Allocate depth: 32 00:11:55.162 # threads/core: 1 00:11:55.162 Run time: 1 seconds 00:11:55.162 Verify: Yes 00:11:55.162 00:11:55.162 Running for 1 seconds... 00:11:55.162 00:11:55.162 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:55.162 ------------------------------------------------------------------------------------ 00:11:55.162 0,0 52064/s 95 MiB/s 0 0 00:11:55.162 3,0 50624/s 93 MiB/s 0 0 00:11:55.162 2,0 51712/s 95 MiB/s 0 0 00:11:55.162 1,0 52576/s 96 MiB/s 0 0 00:11:55.162 ==================================================================================== 00:11:55.162 Total 206976/s 808 MiB/s 0 0' 00:11:55.162 05:09:13 -- accel/accel.sh@20 -- # IFS=: 00:11:55.162 05:09:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:55.162 05:09:13 -- accel/accel.sh@20 -- # read -r var val 00:11:55.162 05:09:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:55.162 05:09:13 -- accel/accel.sh@12 -- # build_accel_config 00:11:55.162 05:09:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:55.162 05:09:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:55.162 05:09:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:55.162 05:09:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:55.162 05:09:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:55.162 05:09:13 -- accel/accel.sh@41 -- # local IFS=, 00:11:55.162 05:09:13 -- accel/accel.sh@42 -- # jq -r . 00:11:55.162 [2024-07-26 05:09:13.695623] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
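For reference, the decompress cases traced here drive the accel_perf example binary directly; the harness writes an accel JSON config to the process over /dev/fd/62 and selects CPU cores with -m. A minimal stand-alone invocation, assuming the repository layout shown in this log (and root plus hugepages, as for any SPDK app), would be roughly:

    cd /home/vagrant/spdk_repo/spdk
    # 1-second software decompress of test/accel/bib on cores 0-3, with result verification (-y)
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf

The -c /dev/fd/62 argument seen in the trace carries the config generated by build_accel_config; as the failed [[ 0 -gt 0 ]] checks above show, no module-specific config is added for these software-module runs.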
00:11:55.162 [2024-07-26 05:09:13.695846] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64786 ] 00:11:55.162 [2024-07-26 05:09:13.887706] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:55.162 [2024-07-26 05:09:14.101042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.162 [2024-07-26 05:09:14.101214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.162 [2024-07-26 05:09:14.101337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:55.162 [2024-07-26 05:09:14.101586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.421 05:09:14 -- accel/accel.sh@21 -- # val= 00:11:55.421 05:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.421 05:09:14 -- accel/accel.sh@20 -- # IFS=: 00:11:55.421 05:09:14 -- accel/accel.sh@20 -- # read -r var val 00:11:55.421 05:09:14 -- accel/accel.sh@21 -- # val= 00:11:55.421 05:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.421 05:09:14 -- accel/accel.sh@20 -- # IFS=: 00:11:55.421 05:09:14 -- accel/accel.sh@20 -- # read -r var val 00:11:55.421 05:09:14 -- accel/accel.sh@21 -- # val= 00:11:55.421 05:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.421 05:09:14 -- accel/accel.sh@20 -- # IFS=: 00:11:55.421 05:09:14 -- accel/accel.sh@20 -- # read -r var val 00:11:55.421 05:09:14 -- accel/accel.sh@21 -- # val=0xf 00:11:55.421 05:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.421 05:09:14 -- accel/accel.sh@20 -- # IFS=: 00:11:55.421 05:09:14 -- accel/accel.sh@20 -- # read -r var val 00:11:55.421 05:09:14 -- accel/accel.sh@21 -- # val= 00:11:55.421 05:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.421 05:09:14 -- accel/accel.sh@20 -- # IFS=: 00:11:55.421 05:09:14 -- accel/accel.sh@20 -- # read -r var val 00:11:55.421 05:09:14 -- accel/accel.sh@21 -- # val= 00:11:55.421 05:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.421 05:09:14 -- accel/accel.sh@20 -- # IFS=: 00:11:55.421 05:09:14 -- accel/accel.sh@20 -- # read -r var val 00:11:55.421 05:09:14 -- accel/accel.sh@21 -- # val=decompress 00:11:55.421 05:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.421 05:09:14 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:55.421 05:09:14 -- accel/accel.sh@20 -- # IFS=: 00:11:55.421 05:09:14 -- accel/accel.sh@20 -- # read -r var val 00:11:55.421 05:09:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:55.421 05:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.421 05:09:14 -- accel/accel.sh@20 -- # IFS=: 00:11:55.421 05:09:14 -- accel/accel.sh@20 -- # read -r var val 00:11:55.421 05:09:14 -- accel/accel.sh@21 -- # val= 00:11:55.421 05:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.421 05:09:14 -- accel/accel.sh@20 -- # IFS=: 00:11:55.421 05:09:14 -- accel/accel.sh@20 -- # read -r var val 00:11:55.421 05:09:14 -- accel/accel.sh@21 -- # val=software 00:11:55.421 05:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.421 05:09:14 -- accel/accel.sh@23 -- # accel_module=software 00:11:55.421 05:09:14 -- accel/accel.sh@20 -- # IFS=: 00:11:55.421 05:09:14 -- accel/accel.sh@20 -- # read -r var val 00:11:55.421 05:09:14 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:55.421 05:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.421 05:09:14 -- accel/accel.sh@20 -- # IFS=: 
00:11:55.421 05:09:14 -- accel/accel.sh@20 -- # read -r var val 00:11:55.421 05:09:14 -- accel/accel.sh@21 -- # val=32 00:11:55.421 05:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.421 05:09:14 -- accel/accel.sh@20 -- # IFS=: 00:11:55.421 05:09:14 -- accel/accel.sh@20 -- # read -r var val 00:11:55.421 05:09:14 -- accel/accel.sh@21 -- # val=32 00:11:55.421 05:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.421 05:09:14 -- accel/accel.sh@20 -- # IFS=: 00:11:55.421 05:09:14 -- accel/accel.sh@20 -- # read -r var val 00:11:55.421 05:09:14 -- accel/accel.sh@21 -- # val=1 00:11:55.421 05:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.421 05:09:14 -- accel/accel.sh@20 -- # IFS=: 00:11:55.422 05:09:14 -- accel/accel.sh@20 -- # read -r var val 00:11:55.422 05:09:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:55.422 05:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.422 05:09:14 -- accel/accel.sh@20 -- # IFS=: 00:11:55.422 05:09:14 -- accel/accel.sh@20 -- # read -r var val 00:11:55.422 05:09:14 -- accel/accel.sh@21 -- # val=Yes 00:11:55.422 05:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.422 05:09:14 -- accel/accel.sh@20 -- # IFS=: 00:11:55.422 05:09:14 -- accel/accel.sh@20 -- # read -r var val 00:11:55.422 05:09:14 -- accel/accel.sh@21 -- # val= 00:11:55.422 05:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.422 05:09:14 -- accel/accel.sh@20 -- # IFS=: 00:11:55.422 05:09:14 -- accel/accel.sh@20 -- # read -r var val 00:11:55.422 05:09:14 -- accel/accel.sh@21 -- # val= 00:11:55.422 05:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.422 05:09:14 -- accel/accel.sh@20 -- # IFS=: 00:11:55.422 05:09:14 -- accel/accel.sh@20 -- # read -r var val 00:11:57.325 05:09:16 -- accel/accel.sh@21 -- # val= 00:11:57.326 05:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.326 05:09:16 -- accel/accel.sh@20 -- # IFS=: 00:11:57.326 05:09:16 -- accel/accel.sh@20 -- # read -r var val 00:11:57.326 05:09:16 -- accel/accel.sh@21 -- # val= 00:11:57.326 05:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.326 05:09:16 -- accel/accel.sh@20 -- # IFS=: 00:11:57.326 05:09:16 -- accel/accel.sh@20 -- # read -r var val 00:11:57.326 05:09:16 -- accel/accel.sh@21 -- # val= 00:11:57.326 05:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.326 05:09:16 -- accel/accel.sh@20 -- # IFS=: 00:11:57.326 05:09:16 -- accel/accel.sh@20 -- # read -r var val 00:11:57.326 05:09:16 -- accel/accel.sh@21 -- # val= 00:11:57.326 05:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.326 05:09:16 -- accel/accel.sh@20 -- # IFS=: 00:11:57.326 05:09:16 -- accel/accel.sh@20 -- # read -r var val 00:11:57.326 05:09:16 -- accel/accel.sh@21 -- # val= 00:11:57.326 05:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.326 05:09:16 -- accel/accel.sh@20 -- # IFS=: 00:11:57.326 05:09:16 -- accel/accel.sh@20 -- # read -r var val 00:11:57.326 05:09:16 -- accel/accel.sh@21 -- # val= 00:11:57.326 05:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.326 05:09:16 -- accel/accel.sh@20 -- # IFS=: 00:11:57.326 05:09:16 -- accel/accel.sh@20 -- # read -r var val 00:11:57.326 05:09:16 -- accel/accel.sh@21 -- # val= 00:11:57.326 05:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.326 05:09:16 -- accel/accel.sh@20 -- # IFS=: 00:11:57.326 05:09:16 -- accel/accel.sh@20 -- # read -r var val 00:11:57.326 05:09:16 -- accel/accel.sh@21 -- # val= 00:11:57.326 05:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.326 05:09:16 -- accel/accel.sh@20 -- # IFS=: 00:11:57.326 05:09:16 -- 
accel/accel.sh@20 -- # read -r var val 00:11:57.326 05:09:16 -- accel/accel.sh@21 -- # val= 00:11:57.326 05:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.326 05:09:16 -- accel/accel.sh@20 -- # IFS=: 00:11:57.326 05:09:16 -- accel/accel.sh@20 -- # read -r var val 00:11:57.326 05:09:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:57.326 05:09:16 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:57.326 05:09:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:57.326 00:11:57.326 real 0m5.159s 00:11:57.326 user 0m7.334s 00:11:57.326 sys 0m0.209s 00:11:57.326 05:09:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:57.326 05:09:16 -- common/autotest_common.sh@10 -- # set +x 00:11:57.326 ************************************ 00:11:57.326 END TEST accel_decomp_mcore 00:11:57.326 ************************************ 00:11:57.326 05:09:16 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:57.326 05:09:16 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:57.326 05:09:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:57.326 05:09:16 -- common/autotest_common.sh@10 -- # set +x 00:11:57.326 ************************************ 00:11:57.326 START TEST accel_decomp_full_mcore 00:11:57.326 ************************************ 00:11:57.326 05:09:16 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:57.326 05:09:16 -- accel/accel.sh@16 -- # local accel_opc 00:11:57.326 05:09:16 -- accel/accel.sh@17 -- # local accel_module 00:11:57.326 05:09:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:57.326 05:09:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:57.326 05:09:16 -- accel/accel.sh@12 -- # build_accel_config 00:11:57.326 05:09:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:57.326 05:09:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:57.326 05:09:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:57.326 05:09:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:57.326 05:09:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:57.326 05:09:16 -- accel/accel.sh@41 -- # local IFS=, 00:11:57.326 05:09:16 -- accel/accel.sh@42 -- # jq -r . 00:11:57.326 [2024-07-26 05:09:16.337381] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:57.326 [2024-07-26 05:09:16.337530] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64830 ] 00:11:57.585 [2024-07-26 05:09:16.506974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:57.844 [2024-07-26 05:09:16.718932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.844 [2024-07-26 05:09:16.719110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.844 [2024-07-26 05:09:16.719342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:57.844 [2024-07-26 05:09:16.719452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.372 05:09:18 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:12:00.372 00:12:00.372 SPDK Configuration: 00:12:00.372 Core mask: 0xf 00:12:00.372 00:12:00.372 Accel Perf Configuration: 00:12:00.372 Workload Type: decompress 00:12:00.372 Transfer size: 111250 bytes 00:12:00.372 Vector count 1 00:12:00.372 Module: software 00:12:00.372 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:00.372 Queue depth: 32 00:12:00.372 Allocate depth: 32 00:12:00.372 # threads/core: 1 00:12:00.372 Run time: 1 seconds 00:12:00.372 Verify: Yes 00:12:00.372 00:12:00.372 Running for 1 seconds... 00:12:00.372 00:12:00.372 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:00.372 ------------------------------------------------------------------------------------ 00:12:00.372 0,0 4064/s 167 MiB/s 0 0 00:12:00.372 3,0 4096/s 169 MiB/s 0 0 00:12:00.372 2,0 4064/s 167 MiB/s 0 0 00:12:00.372 1,0 4096/s 169 MiB/s 0 0 00:12:00.372 ==================================================================================== 00:12:00.372 Total 16320/s 1731 MiB/s 0 0' 00:12:00.372 05:09:18 -- accel/accel.sh@20 -- # IFS=: 00:12:00.372 05:09:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:00.373 05:09:18 -- accel/accel.sh@20 -- # read -r var val 00:12:00.373 05:09:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:00.373 05:09:18 -- accel/accel.sh@12 -- # build_accel_config 00:12:00.373 05:09:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:00.373 05:09:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:00.373 05:09:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:00.373 05:09:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:00.373 05:09:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:00.373 05:09:18 -- accel/accel.sh@41 -- # local IFS=, 00:12:00.373 05:09:18 -- accel/accel.sh@42 -- # jq -r . 00:12:00.373 [2024-07-26 05:09:18.925730] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
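The accel_decomp_full_mcore variant adds -o 0 to the same command line (see the run_test invocation above). Judging by the "Transfer size: 111250 bytes" line, that makes each operation cover the whole bib input rather than 4096-byte blocks; a sketch of the equivalent direct call, under the same assumptions as the previous example, would be:

    # full-buffer decompress: whole 111250-byte input per operation, cores 0-3
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -m 0xf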
00:12:00.373 [2024-07-26 05:09:18.925869] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64869 ] 00:12:00.373 [2024-07-26 05:09:19.089881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:00.373 [2024-07-26 05:09:19.302074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.373 [2024-07-26 05:09:19.302221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:00.373 [2024-07-26 05:09:19.302361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.373 [2024-07-26 05:09:19.302545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.631 05:09:19 -- accel/accel.sh@21 -- # val= 00:12:00.631 05:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.631 05:09:19 -- accel/accel.sh@20 -- # IFS=: 00:12:00.631 05:09:19 -- accel/accel.sh@20 -- # read -r var val 00:12:00.631 05:09:19 -- accel/accel.sh@21 -- # val= 00:12:00.631 05:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.631 05:09:19 -- accel/accel.sh@20 -- # IFS=: 00:12:00.631 05:09:19 -- accel/accel.sh@20 -- # read -r var val 00:12:00.631 05:09:19 -- accel/accel.sh@21 -- # val= 00:12:00.631 05:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.631 05:09:19 -- accel/accel.sh@20 -- # IFS=: 00:12:00.631 05:09:19 -- accel/accel.sh@20 -- # read -r var val 00:12:00.631 05:09:19 -- accel/accel.sh@21 -- # val=0xf 00:12:00.631 05:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.631 05:09:19 -- accel/accel.sh@20 -- # IFS=: 00:12:00.631 05:09:19 -- accel/accel.sh@20 -- # read -r var val 00:12:00.631 05:09:19 -- accel/accel.sh@21 -- # val= 00:12:00.631 05:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.631 05:09:19 -- accel/accel.sh@20 -- # IFS=: 00:12:00.631 05:09:19 -- accel/accel.sh@20 -- # read -r var val 00:12:00.631 05:09:19 -- accel/accel.sh@21 -- # val= 00:12:00.631 05:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.631 05:09:19 -- accel/accel.sh@20 -- # IFS=: 00:12:00.631 05:09:19 -- accel/accel.sh@20 -- # read -r var val 00:12:00.631 05:09:19 -- accel/accel.sh@21 -- # val=decompress 00:12:00.631 05:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.631 05:09:19 -- accel/accel.sh@24 -- # accel_opc=decompress 00:12:00.631 05:09:19 -- accel/accel.sh@20 -- # IFS=: 00:12:00.631 05:09:19 -- accel/accel.sh@20 -- # read -r var val 00:12:00.631 05:09:19 -- accel/accel.sh@21 -- # val='111250 bytes' 00:12:00.631 05:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.631 05:09:19 -- accel/accel.sh@20 -- # IFS=: 00:12:00.631 05:09:19 -- accel/accel.sh@20 -- # read -r var val 00:12:00.631 05:09:19 -- accel/accel.sh@21 -- # val= 00:12:00.631 05:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.631 05:09:19 -- accel/accel.sh@20 -- # IFS=: 00:12:00.631 05:09:19 -- accel/accel.sh@20 -- # read -r var val 00:12:00.631 05:09:19 -- accel/accel.sh@21 -- # val=software 00:12:00.631 05:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.631 05:09:19 -- accel/accel.sh@23 -- # accel_module=software 00:12:00.631 05:09:19 -- accel/accel.sh@20 -- # IFS=: 00:12:00.631 05:09:19 -- accel/accel.sh@20 -- # read -r var val 00:12:00.631 05:09:19 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:00.631 05:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.631 05:09:19 -- accel/accel.sh@20 -- # IFS=: 
00:12:00.631 05:09:19 -- accel/accel.sh@20 -- # read -r var val 00:12:00.631 05:09:19 -- accel/accel.sh@21 -- # val=32 00:12:00.631 05:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.631 05:09:19 -- accel/accel.sh@20 -- # IFS=: 00:12:00.631 05:09:19 -- accel/accel.sh@20 -- # read -r var val 00:12:00.631 05:09:19 -- accel/accel.sh@21 -- # val=32 00:12:00.631 05:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.631 05:09:19 -- accel/accel.sh@20 -- # IFS=: 00:12:00.631 05:09:19 -- accel/accel.sh@20 -- # read -r var val 00:12:00.631 05:09:19 -- accel/accel.sh@21 -- # val=1 00:12:00.631 05:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.631 05:09:19 -- accel/accel.sh@20 -- # IFS=: 00:12:00.631 05:09:19 -- accel/accel.sh@20 -- # read -r var val 00:12:00.631 05:09:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:00.631 05:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.631 05:09:19 -- accel/accel.sh@20 -- # IFS=: 00:12:00.631 05:09:19 -- accel/accel.sh@20 -- # read -r var val 00:12:00.632 05:09:19 -- accel/accel.sh@21 -- # val=Yes 00:12:00.632 05:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.632 05:09:19 -- accel/accel.sh@20 -- # IFS=: 00:12:00.632 05:09:19 -- accel/accel.sh@20 -- # read -r var val 00:12:00.632 05:09:19 -- accel/accel.sh@21 -- # val= 00:12:00.632 05:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.632 05:09:19 -- accel/accel.sh@20 -- # IFS=: 00:12:00.632 05:09:19 -- accel/accel.sh@20 -- # read -r var val 00:12:00.632 05:09:19 -- accel/accel.sh@21 -- # val= 00:12:00.632 05:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.632 05:09:19 -- accel/accel.sh@20 -- # IFS=: 00:12:00.632 05:09:19 -- accel/accel.sh@20 -- # read -r var val 00:12:02.576 05:09:21 -- accel/accel.sh@21 -- # val= 00:12:02.576 05:09:21 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.576 05:09:21 -- accel/accel.sh@20 -- # IFS=: 00:12:02.576 05:09:21 -- accel/accel.sh@20 -- # read -r var val 00:12:02.576 05:09:21 -- accel/accel.sh@21 -- # val= 00:12:02.576 05:09:21 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.576 05:09:21 -- accel/accel.sh@20 -- # IFS=: 00:12:02.576 05:09:21 -- accel/accel.sh@20 -- # read -r var val 00:12:02.576 05:09:21 -- accel/accel.sh@21 -- # val= 00:12:02.576 05:09:21 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.576 05:09:21 -- accel/accel.sh@20 -- # IFS=: 00:12:02.576 05:09:21 -- accel/accel.sh@20 -- # read -r var val 00:12:02.576 05:09:21 -- accel/accel.sh@21 -- # val= 00:12:02.576 05:09:21 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.576 05:09:21 -- accel/accel.sh@20 -- # IFS=: 00:12:02.576 05:09:21 -- accel/accel.sh@20 -- # read -r var val 00:12:02.576 05:09:21 -- accel/accel.sh@21 -- # val= 00:12:02.576 05:09:21 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.576 05:09:21 -- accel/accel.sh@20 -- # IFS=: 00:12:02.576 05:09:21 -- accel/accel.sh@20 -- # read -r var val 00:12:02.576 05:09:21 -- accel/accel.sh@21 -- # val= 00:12:02.576 05:09:21 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.576 05:09:21 -- accel/accel.sh@20 -- # IFS=: 00:12:02.576 05:09:21 -- accel/accel.sh@20 -- # read -r var val 00:12:02.576 05:09:21 -- accel/accel.sh@21 -- # val= 00:12:02.576 05:09:21 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.576 05:09:21 -- accel/accel.sh@20 -- # IFS=: 00:12:02.576 05:09:21 -- accel/accel.sh@20 -- # read -r var val 00:12:02.576 05:09:21 -- accel/accel.sh@21 -- # val= 00:12:02.576 05:09:21 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.576 05:09:21 -- accel/accel.sh@20 -- # IFS=: 00:12:02.576 05:09:21 -- 
accel/accel.sh@20 -- # read -r var val 00:12:02.576 05:09:21 -- accel/accel.sh@21 -- # val= 00:12:02.576 05:09:21 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.576 05:09:21 -- accel/accel.sh@20 -- # IFS=: 00:12:02.576 05:09:21 -- accel/accel.sh@20 -- # read -r var val 00:12:02.576 05:09:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:02.576 05:09:21 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:02.576 05:09:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:02.576 ************************************ 00:12:02.576 END TEST accel_decomp_full_mcore 00:12:02.576 ************************************ 00:12:02.576 00:12:02.576 real 0m5.183s 00:12:02.576 user 0m14.936s 00:12:02.576 sys 0m0.408s 00:12:02.576 05:09:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:02.576 05:09:21 -- common/autotest_common.sh@10 -- # set +x 00:12:02.576 05:09:21 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:02.576 05:09:21 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:12:02.576 05:09:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:02.576 05:09:21 -- common/autotest_common.sh@10 -- # set +x 00:12:02.576 ************************************ 00:12:02.576 START TEST accel_decomp_mthread 00:12:02.576 ************************************ 00:12:02.576 05:09:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:02.576 05:09:21 -- accel/accel.sh@16 -- # local accel_opc 00:12:02.576 05:09:21 -- accel/accel.sh@17 -- # local accel_module 00:12:02.577 05:09:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:02.577 05:09:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:02.577 05:09:21 -- accel/accel.sh@12 -- # build_accel_config 00:12:02.577 05:09:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:02.577 05:09:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:02.577 05:09:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:02.577 05:09:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:02.577 05:09:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:02.577 05:09:21 -- accel/accel.sh@41 -- # local IFS=, 00:12:02.577 05:09:21 -- accel/accel.sh@42 -- # jq -r . 00:12:02.577 [2024-07-26 05:09:21.567476] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:12:02.577 [2024-07-26 05:09:21.567625] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64920 ] 00:12:02.836 [2024-07-26 05:09:21.735246] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.836 [2024-07-26 05:09:21.940334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.371 05:09:23 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:12:05.371 00:12:05.371 SPDK Configuration: 00:12:05.371 Core mask: 0x1 00:12:05.371 00:12:05.371 Accel Perf Configuration: 00:12:05.371 Workload Type: decompress 00:12:05.371 Transfer size: 4096 bytes 00:12:05.371 Vector count 1 00:12:05.371 Module: software 00:12:05.371 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:05.371 Queue depth: 32 00:12:05.371 Allocate depth: 32 00:12:05.371 # threads/core: 2 00:12:05.371 Run time: 1 seconds 00:12:05.371 Verify: Yes 00:12:05.371 00:12:05.371 Running for 1 seconds... 00:12:05.371 00:12:05.371 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:05.371 ------------------------------------------------------------------------------------ 00:12:05.371 0,1 28736/s 52 MiB/s 0 0 00:12:05.371 0,0 28608/s 52 MiB/s 0 0 00:12:05.371 ==================================================================================== 00:12:05.371 Total 57344/s 224 MiB/s 0 0' 00:12:05.371 05:09:23 -- accel/accel.sh@20 -- # IFS=: 00:12:05.371 05:09:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:05.371 05:09:23 -- accel/accel.sh@20 -- # read -r var val 00:12:05.371 05:09:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:05.371 05:09:23 -- accel/accel.sh@12 -- # build_accel_config 00:12:05.371 05:09:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:05.371 05:09:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:05.371 05:09:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:05.371 05:09:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:05.371 05:09:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:05.371 05:09:23 -- accel/accel.sh@41 -- # local IFS=, 00:12:05.371 05:09:23 -- accel/accel.sh@42 -- # jq -r . 00:12:05.371 [2024-07-26 05:09:24.036919] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
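The mthread run differs only in its core mask and thread count: -m is dropped (the configuration summary shows the default single-core mask 0x1) and -T 2 asks accel_perf for two worker threads on that core, which is why the results table lists rows 0,0 and 0,1. Under the same path assumptions as above:

    # single core, two worker threads per core
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2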
00:12:05.371 [2024-07-26 05:09:24.037138] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64946 ] 00:12:05.371 [2024-07-26 05:09:24.209358] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.371 [2024-07-26 05:09:24.404842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.630 05:09:24 -- accel/accel.sh@21 -- # val= 00:12:05.630 05:09:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.630 05:09:24 -- accel/accel.sh@20 -- # IFS=: 00:12:05.630 05:09:24 -- accel/accel.sh@20 -- # read -r var val 00:12:05.630 05:09:24 -- accel/accel.sh@21 -- # val= 00:12:05.630 05:09:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.630 05:09:24 -- accel/accel.sh@20 -- # IFS=: 00:12:05.630 05:09:24 -- accel/accel.sh@20 -- # read -r var val 00:12:05.630 05:09:24 -- accel/accel.sh@21 -- # val= 00:12:05.630 05:09:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.630 05:09:24 -- accel/accel.sh@20 -- # IFS=: 00:12:05.630 05:09:24 -- accel/accel.sh@20 -- # read -r var val 00:12:05.630 05:09:24 -- accel/accel.sh@21 -- # val=0x1 00:12:05.630 05:09:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.630 05:09:24 -- accel/accel.sh@20 -- # IFS=: 00:12:05.630 05:09:24 -- accel/accel.sh@20 -- # read -r var val 00:12:05.630 05:09:24 -- accel/accel.sh@21 -- # val= 00:12:05.630 05:09:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.630 05:09:24 -- accel/accel.sh@20 -- # IFS=: 00:12:05.630 05:09:24 -- accel/accel.sh@20 -- # read -r var val 00:12:05.630 05:09:24 -- accel/accel.sh@21 -- # val= 00:12:05.630 05:09:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.630 05:09:24 -- accel/accel.sh@20 -- # IFS=: 00:12:05.630 05:09:24 -- accel/accel.sh@20 -- # read -r var val 00:12:05.630 05:09:24 -- accel/accel.sh@21 -- # val=decompress 00:12:05.630 05:09:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.630 05:09:24 -- accel/accel.sh@24 -- # accel_opc=decompress 00:12:05.630 05:09:24 -- accel/accel.sh@20 -- # IFS=: 00:12:05.630 05:09:24 -- accel/accel.sh@20 -- # read -r var val 00:12:05.630 05:09:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:05.630 05:09:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.630 05:09:24 -- accel/accel.sh@20 -- # IFS=: 00:12:05.630 05:09:24 -- accel/accel.sh@20 -- # read -r var val 00:12:05.630 05:09:24 -- accel/accel.sh@21 -- # val= 00:12:05.630 05:09:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.631 05:09:24 -- accel/accel.sh@20 -- # IFS=: 00:12:05.631 05:09:24 -- accel/accel.sh@20 -- # read -r var val 00:12:05.631 05:09:24 -- accel/accel.sh@21 -- # val=software 00:12:05.631 05:09:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.631 05:09:24 -- accel/accel.sh@23 -- # accel_module=software 00:12:05.631 05:09:24 -- accel/accel.sh@20 -- # IFS=: 00:12:05.631 05:09:24 -- accel/accel.sh@20 -- # read -r var val 00:12:05.631 05:09:24 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:05.631 05:09:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.631 05:09:24 -- accel/accel.sh@20 -- # IFS=: 00:12:05.631 05:09:24 -- accel/accel.sh@20 -- # read -r var val 00:12:05.631 05:09:24 -- accel/accel.sh@21 -- # val=32 00:12:05.631 05:09:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.631 05:09:24 -- accel/accel.sh@20 -- # IFS=: 00:12:05.631 05:09:24 -- accel/accel.sh@20 -- # read -r var val 00:12:05.631 05:09:24 -- 
accel/accel.sh@21 -- # val=32 00:12:05.631 05:09:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.631 05:09:24 -- accel/accel.sh@20 -- # IFS=: 00:12:05.631 05:09:24 -- accel/accel.sh@20 -- # read -r var val 00:12:05.631 05:09:24 -- accel/accel.sh@21 -- # val=2 00:12:05.631 05:09:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.631 05:09:24 -- accel/accel.sh@20 -- # IFS=: 00:12:05.631 05:09:24 -- accel/accel.sh@20 -- # read -r var val 00:12:05.631 05:09:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:05.631 05:09:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.631 05:09:24 -- accel/accel.sh@20 -- # IFS=: 00:12:05.631 05:09:24 -- accel/accel.sh@20 -- # read -r var val 00:12:05.631 05:09:24 -- accel/accel.sh@21 -- # val=Yes 00:12:05.631 05:09:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.631 05:09:24 -- accel/accel.sh@20 -- # IFS=: 00:12:05.631 05:09:24 -- accel/accel.sh@20 -- # read -r var val 00:12:05.631 05:09:24 -- accel/accel.sh@21 -- # val= 00:12:05.631 05:09:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.631 05:09:24 -- accel/accel.sh@20 -- # IFS=: 00:12:05.631 05:09:24 -- accel/accel.sh@20 -- # read -r var val 00:12:05.631 05:09:24 -- accel/accel.sh@21 -- # val= 00:12:05.631 05:09:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.631 05:09:24 -- accel/accel.sh@20 -- # IFS=: 00:12:05.631 05:09:24 -- accel/accel.sh@20 -- # read -r var val 00:12:07.536 05:09:26 -- accel/accel.sh@21 -- # val= 00:12:07.536 05:09:26 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.536 05:09:26 -- accel/accel.sh@20 -- # IFS=: 00:12:07.536 05:09:26 -- accel/accel.sh@20 -- # read -r var val 00:12:07.536 05:09:26 -- accel/accel.sh@21 -- # val= 00:12:07.536 05:09:26 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.536 05:09:26 -- accel/accel.sh@20 -- # IFS=: 00:12:07.536 05:09:26 -- accel/accel.sh@20 -- # read -r var val 00:12:07.536 05:09:26 -- accel/accel.sh@21 -- # val= 00:12:07.536 05:09:26 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.536 05:09:26 -- accel/accel.sh@20 -- # IFS=: 00:12:07.536 05:09:26 -- accel/accel.sh@20 -- # read -r var val 00:12:07.536 05:09:26 -- accel/accel.sh@21 -- # val= 00:12:07.536 05:09:26 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.536 05:09:26 -- accel/accel.sh@20 -- # IFS=: 00:12:07.536 05:09:26 -- accel/accel.sh@20 -- # read -r var val 00:12:07.536 05:09:26 -- accel/accel.sh@21 -- # val= 00:12:07.536 05:09:26 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.536 05:09:26 -- accel/accel.sh@20 -- # IFS=: 00:12:07.536 05:09:26 -- accel/accel.sh@20 -- # read -r var val 00:12:07.536 05:09:26 -- accel/accel.sh@21 -- # val= 00:12:07.536 05:09:26 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.536 05:09:26 -- accel/accel.sh@20 -- # IFS=: 00:12:07.536 05:09:26 -- accel/accel.sh@20 -- # read -r var val 00:12:07.536 05:09:26 -- accel/accel.sh@21 -- # val= 00:12:07.536 05:09:26 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.536 05:09:26 -- accel/accel.sh@20 -- # IFS=: 00:12:07.536 05:09:26 -- accel/accel.sh@20 -- # read -r var val 00:12:07.536 05:09:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:07.536 05:09:26 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:07.536 05:09:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:07.536 00:12:07.536 real 0m4.862s 00:12:07.536 user 0m4.334s 00:12:07.536 sys 0m0.342s 00:12:07.536 05:09:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:07.536 ************************************ 00:12:07.536 END TEST accel_decomp_mthread 00:12:07.536 
************************************ 00:12:07.536 05:09:26 -- common/autotest_common.sh@10 -- # set +x 00:12:07.536 05:09:26 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:07.536 05:09:26 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:12:07.536 05:09:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:07.536 05:09:26 -- common/autotest_common.sh@10 -- # set +x 00:12:07.536 ************************************ 00:12:07.536 START TEST accel_deomp_full_mthread 00:12:07.536 ************************************ 00:12:07.536 05:09:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:07.536 05:09:26 -- accel/accel.sh@16 -- # local accel_opc 00:12:07.536 05:09:26 -- accel/accel.sh@17 -- # local accel_module 00:12:07.536 05:09:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:07.536 05:09:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:07.536 05:09:26 -- accel/accel.sh@12 -- # build_accel_config 00:12:07.536 05:09:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:07.536 05:09:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:07.536 05:09:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:07.536 05:09:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:07.536 05:09:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:07.536 05:09:26 -- accel/accel.sh@41 -- # local IFS=, 00:12:07.536 05:09:26 -- accel/accel.sh@42 -- # jq -r . 00:12:07.536 [2024-07-26 05:09:26.480659] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:12:07.536 [2024-07-26 05:09:26.480854] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64987 ] 00:12:07.795 [2024-07-26 05:09:26.650287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.795 [2024-07-26 05:09:26.825308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.329 05:09:28 -- accel/accel.sh@18 -- # out='Preparing input file... 00:12:10.329 00:12:10.329 SPDK Configuration: 00:12:10.329 Core mask: 0x1 00:12:10.329 00:12:10.329 Accel Perf Configuration: 00:12:10.329 Workload Type: decompress 00:12:10.329 Transfer size: 111250 bytes 00:12:10.329 Vector count 1 00:12:10.329 Module: software 00:12:10.329 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:10.329 Queue depth: 32 00:12:10.329 Allocate depth: 32 00:12:10.329 # threads/core: 2 00:12:10.329 Run time: 1 seconds 00:12:10.329 Verify: Yes 00:12:10.329 00:12:10.329 Running for 1 seconds... 
00:12:10.329 00:12:10.329 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:10.329 ------------------------------------------------------------------------------------ 00:12:10.329 0,1 2368/s 97 MiB/s 0 0 00:12:10.329 0,0 2336/s 96 MiB/s 0 0 00:12:10.329 ==================================================================================== 00:12:10.329 Total 4704/s 499 MiB/s 0 0' 00:12:10.329 05:09:28 -- accel/accel.sh@20 -- # IFS=: 00:12:10.329 05:09:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:10.329 05:09:28 -- accel/accel.sh@20 -- # read -r var val 00:12:10.329 05:09:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:10.329 05:09:28 -- accel/accel.sh@12 -- # build_accel_config 00:12:10.329 05:09:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:10.329 05:09:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:10.329 05:09:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:10.329 05:09:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:10.329 05:09:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:10.329 05:09:28 -- accel/accel.sh@41 -- # local IFS=, 00:12:10.329 05:09:28 -- accel/accel.sh@42 -- # jq -r . 00:12:10.329 [2024-07-26 05:09:28.905131] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:12:10.329 [2024-07-26 05:09:28.905327] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65019 ] 00:12:10.329 [2024-07-26 05:09:29.085298] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.329 [2024-07-26 05:09:29.268020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.589 05:09:29 -- accel/accel.sh@21 -- # val= 00:12:10.589 05:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # IFS=: 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # read -r var val 00:12:10.589 05:09:29 -- accel/accel.sh@21 -- # val= 00:12:10.589 05:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # IFS=: 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # read -r var val 00:12:10.589 05:09:29 -- accel/accel.sh@21 -- # val= 00:12:10.589 05:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # IFS=: 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # read -r var val 00:12:10.589 05:09:29 -- accel/accel.sh@21 -- # val=0x1 00:12:10.589 05:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # IFS=: 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # read -r var val 00:12:10.589 05:09:29 -- accel/accel.sh@21 -- # val= 00:12:10.589 05:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # IFS=: 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # read -r var val 00:12:10.589 05:09:29 -- accel/accel.sh@21 -- # val= 00:12:10.589 05:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # IFS=: 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # read -r var val 00:12:10.589 05:09:29 -- accel/accel.sh@21 -- # val=decompress 00:12:10.589 05:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.589 05:09:29 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # IFS=: 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # read -r var val 00:12:10.589 05:09:29 -- accel/accel.sh@21 -- # val='111250 bytes' 00:12:10.589 05:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # IFS=: 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # read -r var val 00:12:10.589 05:09:29 -- accel/accel.sh@21 -- # val= 00:12:10.589 05:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # IFS=: 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # read -r var val 00:12:10.589 05:09:29 -- accel/accel.sh@21 -- # val=software 00:12:10.589 05:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.589 05:09:29 -- accel/accel.sh@23 -- # accel_module=software 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # IFS=: 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # read -r var val 00:12:10.589 05:09:29 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:10.589 05:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # IFS=: 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # read -r var val 00:12:10.589 05:09:29 -- accel/accel.sh@21 -- # val=32 00:12:10.589 05:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # IFS=: 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # read -r var val 00:12:10.589 05:09:29 -- accel/accel.sh@21 -- # val=32 00:12:10.589 05:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # IFS=: 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # read -r var val 00:12:10.589 05:09:29 -- accel/accel.sh@21 -- # val=2 00:12:10.589 05:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # IFS=: 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # read -r var val 00:12:10.589 05:09:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:10.589 05:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # IFS=: 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # read -r var val 00:12:10.589 05:09:29 -- accel/accel.sh@21 -- # val=Yes 00:12:10.589 05:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # IFS=: 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # read -r var val 00:12:10.589 05:09:29 -- accel/accel.sh@21 -- # val= 00:12:10.589 05:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # IFS=: 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # read -r var val 00:12:10.589 05:09:29 -- accel/accel.sh@21 -- # val= 00:12:10.589 05:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # IFS=: 00:12:10.589 05:09:29 -- accel/accel.sh@20 -- # read -r var val 00:12:12.494 05:09:31 -- accel/accel.sh@21 -- # val= 00:12:12.494 05:09:31 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.494 05:09:31 -- accel/accel.sh@20 -- # IFS=: 00:12:12.494 05:09:31 -- accel/accel.sh@20 -- # read -r var val 00:12:12.494 05:09:31 -- accel/accel.sh@21 -- # val= 00:12:12.494 05:09:31 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.494 05:09:31 -- accel/accel.sh@20 -- # IFS=: 00:12:12.494 05:09:31 -- accel/accel.sh@20 -- # read -r var val 00:12:12.494 05:09:31 -- accel/accel.sh@21 -- # val= 00:12:12.494 05:09:31 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.494 05:09:31 -- accel/accel.sh@20 -- # IFS=: 00:12:12.494 05:09:31 -- accel/accel.sh@20 -- # 
read -r var val 00:12:12.494 05:09:31 -- accel/accel.sh@21 -- # val= 00:12:12.494 05:09:31 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.494 05:09:31 -- accel/accel.sh@20 -- # IFS=: 00:12:12.494 05:09:31 -- accel/accel.sh@20 -- # read -r var val 00:12:12.494 05:09:31 -- accel/accel.sh@21 -- # val= 00:12:12.494 05:09:31 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.494 05:09:31 -- accel/accel.sh@20 -- # IFS=: 00:12:12.494 05:09:31 -- accel/accel.sh@20 -- # read -r var val 00:12:12.494 05:09:31 -- accel/accel.sh@21 -- # val= 00:12:12.494 05:09:31 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.494 05:09:31 -- accel/accel.sh@20 -- # IFS=: 00:12:12.494 05:09:31 -- accel/accel.sh@20 -- # read -r var val 00:12:12.494 05:09:31 -- accel/accel.sh@21 -- # val= 00:12:12.494 05:09:31 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.494 05:09:31 -- accel/accel.sh@20 -- # IFS=: 00:12:12.494 05:09:31 -- accel/accel.sh@20 -- # read -r var val 00:12:12.494 05:09:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:12.494 05:09:31 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:12.494 05:09:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:12.494 00:12:12.494 real 0m4.873s 00:12:12.494 user 0m4.359s 00:12:12.494 sys 0m0.330s 00:12:12.494 05:09:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:12.494 ************************************ 00:12:12.494 05:09:31 -- common/autotest_common.sh@10 -- # set +x 00:12:12.494 END TEST accel_deomp_full_mthread 00:12:12.494 ************************************ 00:12:12.494 05:09:31 -- accel/accel.sh@116 -- # [[ n == y ]] 00:12:12.494 05:09:31 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:12.494 05:09:31 -- accel/accel.sh@129 -- # build_accel_config 00:12:12.494 05:09:31 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:12:12.494 05:09:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:12.494 05:09:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:12.494 05:09:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:12.494 05:09:31 -- common/autotest_common.sh@10 -- # set +x 00:12:12.494 05:09:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:12.494 05:09:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:12.494 05:09:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:12.494 05:09:31 -- accel/accel.sh@41 -- # local IFS=, 00:12:12.494 05:09:31 -- accel/accel.sh@42 -- # jq -r . 00:12:12.494 ************************************ 00:12:12.494 START TEST accel_dif_functional_tests 00:12:12.494 ************************************ 00:12:12.494 05:09:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:12.494 [2024-07-26 05:09:31.426179] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
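accel_dif_functional_tests does not go through accel_perf; as the run_test line above shows, it launches the CUnit binary built under test/accel/dif with the same generated accel config on fd 62:

    # DIF generate/verify functional tests (the harness runs this on a 0x7 core mask)
    /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62

The dif.c *ERROR* lines in the CUnit output that follows are the expected part of the negative-path cases: each "verify: DIF not generated" test injects a Guard/App/Ref tag mismatch and passes when verification reports it.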
00:12:12.494 [2024-07-26 05:09:31.426345] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65066 ] 00:12:12.494 [2024-07-26 05:09:31.597888] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:12.753 [2024-07-26 05:09:31.786540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.753 [2024-07-26 05:09:31.786656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.753 [2024-07-26 05:09:31.786672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:13.012 00:12:13.012 00:12:13.012 CUnit - A unit testing framework for C - Version 2.1-3 00:12:13.012 http://cunit.sourceforge.net/ 00:12:13.012 00:12:13.012 00:12:13.012 Suite: accel_dif 00:12:13.012 Test: verify: DIF generated, GUARD check ...passed 00:12:13.012 Test: verify: DIF generated, APPTAG check ...passed 00:12:13.012 Test: verify: DIF generated, REFTAG check ...passed 00:12:13.012 Test: verify: DIF not generated, GUARD check ...[2024-07-26 05:09:32.057313] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:13.012 passed 00:12:13.012 Test: verify: DIF not generated, APPTAG check ...[2024-07-26 05:09:32.057429] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:13.012 passed 00:12:13.012 Test: verify: DIF not generated, REFTAG check ...[2024-07-26 05:09:32.057498] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:13.012 [2024-07-26 05:09:32.057561] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:13.012 passed 00:12:13.012 Test: verify: APPTAG correct, APPTAG check ...[2024-07-26 05:09:32.057606] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:13.012 [2024-07-26 05:09:32.057671] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:13.012 passed 00:12:13.012 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:12:13.012 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-07-26 05:09:32.057791] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:12:13.012 passed 00:12:13.012 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:12:13.012 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:12:13.012 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:12:13.012 Test: generate copy: DIF generated, GUARD check ...[2024-07-26 05:09:32.058066] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:12:13.012 passed 00:12:13.012 Test: generate copy: DIF generated, APTTAG check ...passed 00:12:13.012 Test: generate copy: DIF generated, REFTAG check ...passed 00:12:13.012 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:12:13.012 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:12:13.012 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:12:13.012 Test: generate copy: iovecs-len validate ...[2024-07-26 05:09:32.058644] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:12:13.012 passed 00:12:13.012 Test: generate copy: buffer alignment validate ...passed 00:12:13.012 00:12:13.012 Run Summary: Type Total Ran Passed Failed Inactive 00:12:13.012 suites 1 1 n/a 0 0 00:12:13.012 tests 20 20 20 0 0 00:12:13.012 asserts 204 204 204 0 n/a 00:12:13.012 00:12:13.012 Elapsed time = 0.005 seconds 00:12:14.435 00:12:14.435 real 0m1.810s 00:12:14.435 user 0m3.399s 00:12:14.435 sys 0m0.238s 00:12:14.435 05:09:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:14.435 05:09:33 -- common/autotest_common.sh@10 -- # set +x 00:12:14.435 ************************************ 00:12:14.435 END TEST accel_dif_functional_tests 00:12:14.435 ************************************ 00:12:14.435 00:12:14.435 real 1m45.177s 00:12:14.435 user 1m55.498s 00:12:14.435 sys 0m8.693s 00:12:14.435 05:09:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:14.435 05:09:33 -- common/autotest_common.sh@10 -- # set +x 00:12:14.435 ************************************ 00:12:14.435 END TEST accel 00:12:14.435 ************************************ 00:12:14.435 05:09:33 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:12:14.435 05:09:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:14.435 05:09:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:14.435 05:09:33 -- common/autotest_common.sh@10 -- # set +x 00:12:14.435 ************************************ 00:12:14.435 START TEST accel_rpc 00:12:14.435 ************************************ 00:12:14.435 05:09:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:12:14.435 * Looking for test storage... 00:12:14.435 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:12:14.435 05:09:33 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:14.435 05:09:33 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=65146 00:12:14.435 05:09:33 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:12:14.436 05:09:33 -- accel/accel_rpc.sh@15 -- # waitforlisten 65146 00:12:14.436 05:09:33 -- common/autotest_common.sh@819 -- # '[' -z 65146 ']' 00:12:14.436 05:09:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.436 05:09:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:14.436 05:09:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.436 05:09:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:14.436 05:09:33 -- common/autotest_common.sh@10 -- # set +x 00:12:14.436 [2024-07-26 05:09:33.392488] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:12:14.436 [2024-07-26 05:09:33.392640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65146 ] 00:12:14.708 [2024-07-26 05:09:33.551599] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.708 [2024-07-26 05:09:33.728974] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:14.708 [2024-07-26 05:09:33.729222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.276 05:09:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:15.276 05:09:34 -- common/autotest_common.sh@852 -- # return 0 00:12:15.276 05:09:34 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:12:15.276 05:09:34 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:12:15.276 05:09:34 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:12:15.276 05:09:34 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:12:15.276 05:09:34 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:12:15.276 05:09:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:15.276 05:09:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:15.276 05:09:34 -- common/autotest_common.sh@10 -- # set +x 00:12:15.276 ************************************ 00:12:15.276 START TEST accel_assign_opcode 00:12:15.276 ************************************ 00:12:15.276 05:09:34 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:12:15.276 05:09:34 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:12:15.276 05:09:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:15.276 05:09:34 -- common/autotest_common.sh@10 -- # set +x 00:12:15.276 [2024-07-26 05:09:34.269942] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:12:15.276 05:09:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:15.276 05:09:34 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:12:15.276 05:09:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:15.276 05:09:34 -- common/autotest_common.sh@10 -- # set +x 00:12:15.276 [2024-07-26 05:09:34.277900] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:12:15.276 05:09:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:15.276 05:09:34 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:12:15.276 05:09:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:15.276 05:09:34 -- common/autotest_common.sh@10 -- # set +x 00:12:15.843 05:09:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:15.843 05:09:34 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:12:15.843 05:09:34 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:12:15.843 05:09:34 -- accel/accel_rpc.sh@42 -- # grep software 00:12:15.843 05:09:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:15.843 05:09:34 -- common/autotest_common.sh@10 -- # set +x 00:12:15.843 05:09:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:15.843 software 00:12:15.843 00:12:15.843 real 0m0.680s 00:12:15.843 user 0m0.015s 00:12:15.843 sys 0m0.011s 00:12:15.843 05:09:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:15.843 05:09:34 -- common/autotest_common.sh@10 -- # set +x 00:12:15.843 ************************************ 
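The assign-opcode sequence above is pure JSON-RPC against a spdk_tgt started with --wait-for-rpc. The manual equivalent, using the rpc.py helper from the same tree, is approximately:

    # map the copy opcode to the software module before framework init, then verify
    ./scripts/rpc.py accel_assign_opc -o copy -m software
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # expect: software

All three RPC names and the jq filter appear verbatim in the trace; rpc_cmd there is the test framework's wrapper for issuing these same calls.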
00:12:15.843 END TEST accel_assign_opcode 00:12:15.843 ************************************ 00:12:16.102 05:09:34 -- accel/accel_rpc.sh@55 -- # killprocess 65146 00:12:16.102 05:09:34 -- common/autotest_common.sh@926 -- # '[' -z 65146 ']' 00:12:16.102 05:09:34 -- common/autotest_common.sh@930 -- # kill -0 65146 00:12:16.102 05:09:34 -- common/autotest_common.sh@931 -- # uname 00:12:16.102 05:09:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:16.102 05:09:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65146 00:12:16.102 05:09:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:16.102 05:09:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:16.102 killing process with pid 65146 00:12:16.102 05:09:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65146' 00:12:16.102 05:09:35 -- common/autotest_common.sh@945 -- # kill 65146 00:12:16.102 05:09:35 -- common/autotest_common.sh@950 -- # wait 65146 00:12:18.005 00:12:18.005 real 0m3.769s 00:12:18.005 user 0m3.711s 00:12:18.005 sys 0m0.476s 00:12:18.005 05:09:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:18.005 05:09:37 -- common/autotest_common.sh@10 -- # set +x 00:12:18.005 ************************************ 00:12:18.005 END TEST accel_rpc 00:12:18.005 ************************************ 00:12:18.005 05:09:37 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:18.005 05:09:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:18.005 05:09:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:18.005 05:09:37 -- common/autotest_common.sh@10 -- # set +x 00:12:18.005 ************************************ 00:12:18.005 START TEST app_cmdline 00:12:18.005 ************************************ 00:12:18.005 05:09:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:18.263 * Looking for test storage... 00:12:18.263 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:18.263 05:09:37 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:12:18.263 05:09:37 -- app/cmdline.sh@17 -- # spdk_tgt_pid=65257 00:12:18.263 05:09:37 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:12:18.263 05:09:37 -- app/cmdline.sh@18 -- # waitforlisten 65257 00:12:18.263 05:09:37 -- common/autotest_common.sh@819 -- # '[' -z 65257 ']' 00:12:18.263 05:09:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.263 05:09:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:18.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.263 05:09:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.263 05:09:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:18.263 05:09:37 -- common/autotest_common.sh@10 -- # set +x 00:12:18.264 [2024-07-26 05:09:37.232789] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
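[editor note] The accel_rpc run above reduces to a short RPC sequence against a target started with --wait-for-rpc; a minimal sketch using only the calls that appear in the trace (the module name "software" and the default RPC socket are taken from the log, not invented):
$ /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m software
$ /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
$ /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # the test expects this to print "software"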
00:12:18.264 [2024-07-26 05:09:37.233015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65257 ] 00:12:18.523 [2024-07-26 05:09:37.400188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.523 [2024-07-26 05:09:37.576029] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:18.523 [2024-07-26 05:09:37.576333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.899 05:09:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:19.899 05:09:38 -- common/autotest_common.sh@852 -- # return 0 00:12:19.899 05:09:38 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:12:20.158 { 00:12:20.158 "version": "SPDK v24.01.1-pre git sha1 dbef7efac", 00:12:20.158 "fields": { 00:12:20.158 "major": 24, 00:12:20.158 "minor": 1, 00:12:20.158 "patch": 1, 00:12:20.158 "suffix": "-pre", 00:12:20.158 "commit": "dbef7efac" 00:12:20.158 } 00:12:20.158 } 00:12:20.158 05:09:39 -- app/cmdline.sh@22 -- # expected_methods=() 00:12:20.158 05:09:39 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:12:20.158 05:09:39 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:12:20.158 05:09:39 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:12:20.158 05:09:39 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:12:20.158 05:09:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:20.158 05:09:39 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:12:20.158 05:09:39 -- app/cmdline.sh@26 -- # sort 00:12:20.158 05:09:39 -- common/autotest_common.sh@10 -- # set +x 00:12:20.158 05:09:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:20.158 05:09:39 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:12:20.158 05:09:39 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:12:20.159 05:09:39 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:20.159 05:09:39 -- common/autotest_common.sh@640 -- # local es=0 00:12:20.159 05:09:39 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:20.159 05:09:39 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:20.159 05:09:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:20.159 05:09:39 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:20.159 05:09:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:20.159 05:09:39 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:20.159 05:09:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:20.159 05:09:39 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:20.159 05:09:39 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:20.159 05:09:39 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:20.418 request: 00:12:20.418 { 00:12:20.418 "method": "env_dpdk_get_mem_stats", 00:12:20.418 "req_id": 1 00:12:20.418 } 00:12:20.418 Got 
JSON-RPC error response 00:12:20.418 response: 00:12:20.418 { 00:12:20.418 "code": -32601, 00:12:20.418 "message": "Method not found" 00:12:20.418 } 00:12:20.418 05:09:39 -- common/autotest_common.sh@643 -- # es=1 00:12:20.418 05:09:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:20.418 05:09:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:20.418 05:09:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:20.418 05:09:39 -- app/cmdline.sh@1 -- # killprocess 65257 00:12:20.418 05:09:39 -- common/autotest_common.sh@926 -- # '[' -z 65257 ']' 00:12:20.418 05:09:39 -- common/autotest_common.sh@930 -- # kill -0 65257 00:12:20.418 05:09:39 -- common/autotest_common.sh@931 -- # uname 00:12:20.418 05:09:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:20.418 05:09:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65257 00:12:20.418 05:09:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:20.418 05:09:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:20.418 killing process with pid 65257 00:12:20.418 05:09:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65257' 00:12:20.418 05:09:39 -- common/autotest_common.sh@945 -- # kill 65257 00:12:20.418 05:09:39 -- common/autotest_common.sh@950 -- # wait 65257 00:12:22.952 00:12:22.952 real 0m4.375s 00:12:22.952 user 0m4.993s 00:12:22.952 sys 0m0.565s 00:12:22.952 05:09:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:22.952 05:09:41 -- common/autotest_common.sh@10 -- # set +x 00:12:22.952 ************************************ 00:12:22.952 END TEST app_cmdline 00:12:22.952 ************************************ 00:12:22.952 05:09:41 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:22.952 05:09:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:22.952 05:09:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:22.952 05:09:41 -- common/autotest_common.sh@10 -- # set +x 00:12:22.952 ************************************ 00:12:22.952 START TEST version 00:12:22.952 ************************************ 00:12:22.952 05:09:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:22.952 * Looking for test storage... 
00:12:22.952 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:22.952 05:09:41 -- app/version.sh@17 -- # get_header_version major 00:12:22.952 05:09:41 -- app/version.sh@14 -- # cut -f2 00:12:22.952 05:09:41 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:22.952 05:09:41 -- app/version.sh@14 -- # tr -d '"' 00:12:22.952 05:09:41 -- app/version.sh@17 -- # major=24 00:12:22.952 05:09:41 -- app/version.sh@18 -- # get_header_version minor 00:12:22.952 05:09:41 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:22.952 05:09:41 -- app/version.sh@14 -- # tr -d '"' 00:12:22.952 05:09:41 -- app/version.sh@14 -- # cut -f2 00:12:22.952 05:09:41 -- app/version.sh@18 -- # minor=1 00:12:22.952 05:09:41 -- app/version.sh@19 -- # get_header_version patch 00:12:22.952 05:09:41 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:22.952 05:09:41 -- app/version.sh@14 -- # cut -f2 00:12:22.952 05:09:41 -- app/version.sh@14 -- # tr -d '"' 00:12:22.952 05:09:41 -- app/version.sh@19 -- # patch=1 00:12:22.952 05:09:41 -- app/version.sh@20 -- # get_header_version suffix 00:12:22.952 05:09:41 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:22.952 05:09:41 -- app/version.sh@14 -- # cut -f2 00:12:22.952 05:09:41 -- app/version.sh@14 -- # tr -d '"' 00:12:22.952 05:09:41 -- app/version.sh@20 -- # suffix=-pre 00:12:22.952 05:09:41 -- app/version.sh@22 -- # version=24.1 00:12:22.952 05:09:41 -- app/version.sh@25 -- # (( patch != 0 )) 00:12:22.952 05:09:41 -- app/version.sh@25 -- # version=24.1.1 00:12:22.952 05:09:41 -- app/version.sh@28 -- # version=24.1.1rc0 00:12:22.952 05:09:41 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:22.952 05:09:41 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:12:22.952 05:09:41 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:12:22.952 05:09:41 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:12:22.952 00:12:22.952 real 0m0.148s 00:12:22.952 user 0m0.078s 00:12:22.952 sys 0m0.106s 00:12:22.952 05:09:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:22.952 05:09:41 -- common/autotest_common.sh@10 -- # set +x 00:12:22.952 ************************************ 00:12:22.952 END TEST version 00:12:22.952 ************************************ 00:12:22.952 05:09:41 -- spdk/autotest.sh@194 -- # '[' 1 -eq 1 ']' 00:12:22.952 05:09:41 -- spdk/autotest.sh@195 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:12:22.952 05:09:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:22.952 05:09:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:22.952 05:09:41 -- common/autotest_common.sh@10 -- # set +x 00:12:22.952 ************************************ 00:12:22.952 START TEST blockdev_general 00:12:22.952 ************************************ 00:12:22.952 05:09:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:12:22.952 * Looking for test storage... 
00:12:22.952 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:22.952 05:09:41 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:22.952 05:09:41 -- bdev/nbd_common.sh@6 -- # set -e 00:12:22.952 05:09:41 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:22.952 05:09:41 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:22.952 05:09:41 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:22.952 05:09:41 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:22.952 05:09:41 -- bdev/blockdev.sh@18 -- # : 00:12:22.952 05:09:41 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:12:22.952 05:09:41 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:12:22.952 05:09:41 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:12:22.952 05:09:41 -- bdev/blockdev.sh@672 -- # uname -s 00:12:22.952 05:09:41 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:12:22.952 05:09:41 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:12:22.952 05:09:41 -- bdev/blockdev.sh@680 -- # test_type=bdev 00:12:22.952 05:09:41 -- bdev/blockdev.sh@681 -- # crypto_device= 00:12:22.952 05:09:41 -- bdev/blockdev.sh@682 -- # dek= 00:12:22.952 05:09:41 -- bdev/blockdev.sh@683 -- # env_ctx= 00:12:22.952 05:09:41 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:12:22.952 05:09:41 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:12:22.952 05:09:41 -- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]] 00:12:22.952 05:09:41 -- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc 00:12:22.952 05:09:41 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:12:22.952 05:09:41 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=65422 00:12:22.952 05:09:41 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:22.952 05:09:41 -- bdev/blockdev.sh@47 -- # waitforlisten 65422 00:12:22.952 05:09:41 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:12:22.952 05:09:41 -- common/autotest_common.sh@819 -- # '[' -z 65422 ']' 00:12:22.952 05:09:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.952 05:09:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:22.952 05:09:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.952 05:09:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:22.952 05:09:41 -- common/autotest_common.sh@10 -- # set +x 00:12:22.952 [2024-07-26 05:09:41.872578] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:12:22.952 [2024-07-26 05:09:41.872738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65422 ] 00:12:22.952 [2024-07-26 05:09:42.044230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.211 [2024-07-26 05:09:42.216821] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:23.211 [2024-07-26 05:09:42.217102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.778 05:09:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:23.778 05:09:42 -- common/autotest_common.sh@852 -- # return 0 00:12:23.778 05:09:42 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:12:23.778 05:09:42 -- bdev/blockdev.sh@694 -- # setup_bdev_conf 00:12:23.778 05:09:42 -- bdev/blockdev.sh@51 -- # rpc_cmd 00:12:23.778 05:09:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.778 05:09:42 -- common/autotest_common.sh@10 -- # set +x 00:12:24.713 [2024-07-26 05:09:43.499883] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:24.713 [2024-07-26 05:09:43.499977] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:24.713 00:12:24.713 [2024-07-26 05:09:43.507849] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:24.713 [2024-07-26 05:09:43.507943] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:24.713 00:12:24.713 Malloc0 00:12:24.713 Malloc1 00:12:24.713 Malloc2 00:12:24.713 Malloc3 00:12:24.713 Malloc4 00:12:24.713 Malloc5 00:12:24.713 Malloc6 00:12:24.713 Malloc7 00:12:24.972 Malloc8 00:12:24.972 Malloc9 00:12:24.972 [2024-07-26 05:09:43.866642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:24.972 [2024-07-26 05:09:43.866732] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.972 [2024-07-26 05:09:43.866762] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c380 00:12:24.972 [2024-07-26 05:09:43.866776] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.972 [2024-07-26 05:09:43.869470] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.972 [2024-07-26 05:09:43.869528] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:24.972 TestPT 00:12:24.972 05:09:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:24.972 05:09:43 -- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:12:24.972 5000+0 records in 00:12:24.972 5000+0 records out 00:12:24.972 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0224204 s, 457 MB/s 00:12:24.972 05:09:43 -- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:12:24.972 05:09:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:24.972 05:09:43 -- common/autotest_common.sh@10 -- # set +x 00:12:24.972 AIO0 00:12:24.972 05:09:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:24.972 05:09:43 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:12:24.972 05:09:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:24.972 05:09:43 -- common/autotest_common.sh@10 -- # set +x 
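[editor note] The AIO0 bdev registered above is backed by nothing more than a 10 MB file created with dd and attached over RPC; a minimal sketch of the same two steps, with the file path and 2048-byte block size copied from the trace:
$ dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000
$ /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048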
00:12:24.972 05:09:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:24.972 05:09:43 -- bdev/blockdev.sh@738 -- # cat 00:12:24.972 05:09:43 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:12:24.972 05:09:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:24.972 05:09:43 -- common/autotest_common.sh@10 -- # set +x 00:12:24.972 05:09:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:24.972 05:09:43 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:12:24.972 05:09:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:24.972 05:09:43 -- common/autotest_common.sh@10 -- # set +x 00:12:24.972 05:09:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:24.972 05:09:44 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:24.972 05:09:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:24.972 05:09:44 -- common/autotest_common.sh@10 -- # set +x 00:12:24.972 05:09:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:24.972 05:09:44 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:12:24.972 05:09:44 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:12:24.972 05:09:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:24.972 05:09:44 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:12:24.972 05:09:44 -- common/autotest_common.sh@10 -- # set +x 00:12:25.231 05:09:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:25.231 05:09:44 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:12:25.231 05:09:44 -- bdev/blockdev.sh@747 -- # jq -r .name 00:12:25.233 05:09:44 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "5473e72e-d8c1-4fe7-9e63-a8f903eb1731"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "5473e72e-d8c1-4fe7-9e63-a8f903eb1731",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "d30fe741-e2f0-53f6-b69f-a940b76820c8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "d30fe741-e2f0-53f6-b69f-a940b76820c8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "eed747de-69cd-5213-ac7c-9ac43d120948"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "eed747de-69cd-5213-ac7c-9ac43d120948",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "28a88ec2-9fa6-5cc3-a9fa-20aebe01dd9a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "28a88ec2-9fa6-5cc3-a9fa-20aebe01dd9a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "c34d5efb-2747-5f4b-9809-817060826f89"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c34d5efb-2747-5f4b-9809-817060826f89",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "3ffa7bd1-b7a2-51fa-8426-8b0cc127c4ac"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3ffa7bd1-b7a2-51fa-8426-8b0cc127c4ac",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "61d563fc-dd1c-51ed-92a2-af22a4e425aa"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "61d563fc-dd1c-51ed-92a2-af22a4e425aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": 
[' ' "f48cf60e-fa3d-532d-89ab-9470cd0f3993"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f48cf60e-fa3d-532d-89ab-9470cd0f3993",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "ef427d45-90f7-5eb9-9371-637387d22b5a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ef427d45-90f7-5eb9-9371-637387d22b5a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "126f2ebd-5883-5905-aa38-fcb425847da9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "126f2ebd-5883-5905-aa38-fcb425847da9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "76a0998e-4d62-53ef-a1d7-342ffbe51e84"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "76a0998e-4d62-53ef-a1d7-342ffbe51e84",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "a823701c-3e9c-50d8-b101-0775f2f29432"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a823701c-3e9c-50d8-b101-0775f2f29432",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "5c70fba3-4e6b-45fd-a10c-267337407493"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "5c70fba3-4e6b-45fd-a10c-267337407493",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "5c70fba3-4e6b-45fd-a10c-267337407493",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "2c8a2d26-ac83-42a0-b17e-95d4f517d9fa",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "4cb4fba0-11b8-4757-bece-584061fe9d29",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "d33fa50a-c35d-4078-a456-7a6277e42d52"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "d33fa50a-c35d-4078-a456-7a6277e42d52",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d33fa50a-c35d-4078-a456-7a6277e42d52",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "758ac65d-852e-46dd-83c9-a6ba5bfcc3dc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "989ed421-b73b-41af-aecf-5b5518febf9e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "a101b393-3da3-4f70-8f81-0e581f9aad6f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a101b393-3da3-4f70-8f81-0e581f9aad6f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 
0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "a101b393-3da3-4f70-8f81-0e581f9aad6f",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "6aafa61c-9374-42f0-93f1-24faee44b682",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "b595e94c-90a8-47d9-badd-a8920d664bde",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "97311939-422a-49f0-86ad-a2655938e59a"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "97311939-422a-49f0-86ad-a2655938e59a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:12:25.233 05:09:44 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:12:25.233 05:09:44 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0 00:12:25.233 05:09:44 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:12:25.233 05:09:44 -- bdev/blockdev.sh@752 -- # killprocess 65422 00:12:25.233 05:09:44 -- common/autotest_common.sh@926 -- # '[' -z 65422 ']' 00:12:25.233 05:09:44 -- common/autotest_common.sh@930 -- # kill -0 65422 00:12:25.233 05:09:44 -- common/autotest_common.sh@931 -- # uname 00:12:25.233 05:09:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:25.233 05:09:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65422 00:12:25.233 killing process with pid 65422 00:12:25.233 05:09:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:25.233 05:09:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:25.233 05:09:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65422' 00:12:25.233 05:09:44 -- common/autotest_common.sh@945 -- # kill 65422 00:12:25.233 05:09:44 -- common/autotest_common.sh@950 -- # wait 65422 00:12:28.519 05:09:47 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:28.519 05:09:47 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:12:28.519 05:09:47 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:12:28.519 
05:09:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:28.519 05:09:47 -- common/autotest_common.sh@10 -- # set +x 00:12:28.519 ************************************ 00:12:28.519 START TEST bdev_hello_world 00:12:28.519 ************************************ 00:12:28.519 05:09:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:12:28.519 [2024-07-26 05:09:47.115326] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:12:28.519 [2024-07-26 05:09:47.115492] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65507 ] 00:12:28.519 [2024-07-26 05:09:47.272775] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.519 [2024-07-26 05:09:47.449206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.778 [2024-07-26 05:09:47.772965] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:28.778 [2024-07-26 05:09:47.773100] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:28.778 [2024-07-26 05:09:47.780924] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:28.778 [2024-07-26 05:09:47.781004] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:28.778 [2024-07-26 05:09:47.788948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:28.778 [2024-07-26 05:09:47.789035] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:28.778 [2024-07-26 05:09:47.789054] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:29.036 [2024-07-26 05:09:47.967364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:29.036 [2024-07-26 05:09:47.967494] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.036 [2024-07-26 05:09:47.967520] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:12:29.036 [2024-07-26 05:09:47.967540] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.036 [2024-07-26 05:09:47.970347] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.036 [2024-07-26 05:09:47.970406] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:29.296 [2024-07-26 05:09:48.232914] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:29.296 [2024-07-26 05:09:48.233029] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:12:29.296 [2024-07-26 05:09:48.233066] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:29.296 [2024-07-26 05:09:48.233126] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:29.296 [2024-07-26 05:09:48.233184] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:29.296 [2024-07-26 05:09:48.233204] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:29.296 [2024-07-26 05:09:48.233258] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
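[editor note] The "Hello World!" read-back above comes from the hello_bdev example driven by the generated bdev.json and pointed at Malloc0; outside the harness it is a single command (same binary, config file and bdev name as in the trace):
$ /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0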
00:12:29.296 00:12:29.296 [2024-07-26 05:09:48.233284] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:31.213 00:12:31.213 real 0m3.023s 00:12:31.213 user 0m2.581s 00:12:31.213 sys 0m0.311s 00:12:31.213 05:09:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:31.213 ************************************ 00:12:31.213 END TEST bdev_hello_world 00:12:31.213 ************************************ 00:12:31.213 05:09:50 -- common/autotest_common.sh@10 -- # set +x 00:12:31.213 05:09:50 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:12:31.213 05:09:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:31.213 05:09:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:31.213 05:09:50 -- common/autotest_common.sh@10 -- # set +x 00:12:31.213 ************************************ 00:12:31.213 START TEST bdev_bounds 00:12:31.213 ************************************ 00:12:31.213 05:09:50 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:12:31.213 05:09:50 -- bdev/blockdev.sh@288 -- # bdevio_pid=65560 00:12:31.213 Process bdevio pid: 65560 00:12:31.213 05:09:50 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:31.213 05:09:50 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 65560' 00:12:31.213 05:09:50 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:31.213 05:09:50 -- bdev/blockdev.sh@291 -- # waitforlisten 65560 00:12:31.213 05:09:50 -- common/autotest_common.sh@819 -- # '[' -z 65560 ']' 00:12:31.213 05:09:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.213 05:09:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:31.213 05:09:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.213 05:09:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:31.213 05:09:50 -- common/autotest_common.sh@10 -- # set +x 00:12:31.213 [2024-07-26 05:09:50.208625] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:12:31.213 [2024-07-26 05:09:50.208786] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65560 ] 00:12:31.517 [2024-07-26 05:09:50.378814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:31.517 [2024-07-26 05:09:50.556187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.517 [2024-07-26 05:09:50.556322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.517 [2024-07-26 05:09:50.556342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.084 [2024-07-26 05:09:50.898051] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:32.084 [2024-07-26 05:09:50.898130] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:32.084 [2024-07-26 05:09:50.906013] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:32.084 [2024-07-26 05:09:50.906059] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:32.084 [2024-07-26 05:09:50.914038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:32.084 [2024-07-26 05:09:50.914078] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:32.084 [2024-07-26 05:09:50.914094] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:32.084 [2024-07-26 05:09:51.087497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:32.084 [2024-07-26 05:09:51.087593] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.084 [2024-07-26 05:09:51.087626] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:12:32.084 [2024-07-26 05:09:51.087640] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.084 [2024-07-26 05:09:51.090299] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.084 [2024-07-26 05:09:51.090352] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:33.017 05:09:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:33.017 05:09:51 -- common/autotest_common.sh@852 -- # return 0 00:12:33.017 05:09:51 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:33.017 I/O targets: 00:12:33.017 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:12:33.017 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:12:33.017 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:12:33.017 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:12:33.017 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:12:33.017 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:12:33.017 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:12:33.017 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:12:33.017 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:12:33.017 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:12:33.017 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:12:33.017 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:12:33.017 raid0: 131072 blocks of 512 bytes (64 MiB) 00:12:33.017 concat0: 131072 blocks of 512 bytes (64 MiB) 00:12:33.017 raid1: 65536 blocks of 512 bytes (32 MiB) 00:12:33.017 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
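[editor note] The I/O target sizes above follow directly from the bdev configuration dumped earlier: Malloc1p0/Malloc1p1 are the two 32768-block halves of Malloc1, Malloc2p0-Malloc2p7 are 8192-block splits of Malloc2, raid0 and concat0 each combine two 65536-block Malloc bdevs (hence 131072 blocks), raid1 mirrors two of them (65536 blocks), and AIO0 is the 5000 x 2048-byte dd file. For any target loaded with the same bdev.json, a sketch of producing such a listing (the jq string interpolation is illustrative; only bdev_get_bdevs and jq themselves appear in the trace):
$ /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs | jq -r '.[] | "\(.name): \(.num_blocks) blocks of \(.block_size) bytes"'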
00:12:33.017 00:12:33.017 00:12:33.017 CUnit - A unit testing framework for C - Version 2.1-3 00:12:33.017 http://cunit.sourceforge.net/ 00:12:33.017 00:12:33.017 00:12:33.017 Suite: bdevio tests on: AIO0 00:12:33.017 Test: blockdev write read block ...passed 00:12:33.017 Test: blockdev write zeroes read block ...passed 00:12:33.017 Test: blockdev write zeroes read no split ...passed 00:12:33.017 Test: blockdev write zeroes read split ...passed 00:12:33.017 Test: blockdev write zeroes read split partial ...passed 00:12:33.017 Test: blockdev reset ...passed 00:12:33.017 Test: blockdev write read 8 blocks ...passed 00:12:33.017 Test: blockdev write read size > 128k ...passed 00:12:33.017 Test: blockdev write read invalid size ...passed 00:12:33.017 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.017 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.017 Test: blockdev write read max offset ...passed 00:12:33.017 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.017 Test: blockdev writev readv 8 blocks ...passed 00:12:33.017 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.017 Test: blockdev writev readv block ...passed 00:12:33.017 Test: blockdev writev readv size > 128k ...passed 00:12:33.017 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.017 Test: blockdev comparev and writev ...passed 00:12:33.017 Test: blockdev nvme passthru rw ...passed 00:12:33.017 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.017 Test: blockdev nvme admin passthru ...passed 00:12:33.017 Test: blockdev copy ...passed 00:12:33.017 Suite: bdevio tests on: raid1 00:12:33.017 Test: blockdev write read block ...passed 00:12:33.017 Test: blockdev write zeroes read block ...passed 00:12:33.017 Test: blockdev write zeroes read no split ...passed 00:12:33.017 Test: blockdev write zeroes read split ...passed 00:12:33.017 Test: blockdev write zeroes read split partial ...passed 00:12:33.017 Test: blockdev reset ...passed 00:12:33.017 Test: blockdev write read 8 blocks ...passed 00:12:33.275 Test: blockdev write read size > 128k ...passed 00:12:33.275 Test: blockdev write read invalid size ...passed 00:12:33.275 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.276 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.276 Test: blockdev write read max offset ...passed 00:12:33.276 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.276 Test: blockdev writev readv 8 blocks ...passed 00:12:33.276 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.276 Test: blockdev writev readv block ...passed 00:12:33.276 Test: blockdev writev readv size > 128k ...passed 00:12:33.276 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.276 Test: blockdev comparev and writev ...passed 00:12:33.276 Test: blockdev nvme passthru rw ...passed 00:12:33.276 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.276 Test: blockdev nvme admin passthru ...passed 00:12:33.276 Test: blockdev copy ...passed 00:12:33.276 Suite: bdevio tests on: concat0 00:12:33.276 Test: blockdev write read block ...passed 00:12:33.276 Test: blockdev write zeroes read block ...passed 00:12:33.276 Test: blockdev write zeroes read no split ...passed 00:12:33.276 Test: blockdev write zeroes read split ...passed 00:12:33.276 Test: blockdev write zeroes read split partial ...passed 00:12:33.276 Test: blockdev reset 
...passed 00:12:33.276 Test: blockdev write read 8 blocks ...passed 00:12:33.276 Test: blockdev write read size > 128k ...passed 00:12:33.276 Test: blockdev write read invalid size ...passed 00:12:33.276 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.276 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.276 Test: blockdev write read max offset ...passed 00:12:33.276 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.276 Test: blockdev writev readv 8 blocks ...passed 00:12:33.276 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.276 Test: blockdev writev readv block ...passed 00:12:33.276 Test: blockdev writev readv size > 128k ...passed 00:12:33.276 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.276 Test: blockdev comparev and writev ...passed 00:12:33.276 Test: blockdev nvme passthru rw ...passed 00:12:33.276 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.276 Test: blockdev nvme admin passthru ...passed 00:12:33.276 Test: blockdev copy ...passed 00:12:33.276 Suite: bdevio tests on: raid0 00:12:33.276 Test: blockdev write read block ...passed 00:12:33.276 Test: blockdev write zeroes read block ...passed 00:12:33.276 Test: blockdev write zeroes read no split ...passed 00:12:33.276 Test: blockdev write zeroes read split ...passed 00:12:33.276 Test: blockdev write zeroes read split partial ...passed 00:12:33.276 Test: blockdev reset ...passed 00:12:33.276 Test: blockdev write read 8 blocks ...passed 00:12:33.276 Test: blockdev write read size > 128k ...passed 00:12:33.276 Test: blockdev write read invalid size ...passed 00:12:33.276 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.276 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.276 Test: blockdev write read max offset ...passed 00:12:33.276 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.276 Test: blockdev writev readv 8 blocks ...passed 00:12:33.276 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.276 Test: blockdev writev readv block ...passed 00:12:33.276 Test: blockdev writev readv size > 128k ...passed 00:12:33.276 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.276 Test: blockdev comparev and writev ...passed 00:12:33.276 Test: blockdev nvme passthru rw ...passed 00:12:33.276 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.276 Test: blockdev nvme admin passthru ...passed 00:12:33.276 Test: blockdev copy ...passed 00:12:33.276 Suite: bdevio tests on: TestPT 00:12:33.276 Test: blockdev write read block ...passed 00:12:33.276 Test: blockdev write zeroes read block ...passed 00:12:33.276 Test: blockdev write zeroes read no split ...passed 00:12:33.276 Test: blockdev write zeroes read split ...passed 00:12:33.276 Test: blockdev write zeroes read split partial ...passed 00:12:33.276 Test: blockdev reset ...passed 00:12:33.276 Test: blockdev write read 8 blocks ...passed 00:12:33.276 Test: blockdev write read size > 128k ...passed 00:12:33.276 Test: blockdev write read invalid size ...passed 00:12:33.276 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.276 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.276 Test: blockdev write read max offset ...passed 00:12:33.276 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.276 Test: blockdev writev readv 8 blocks 
...passed 00:12:33.276 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.276 Test: blockdev writev readv block ...passed 00:12:33.276 Test: blockdev writev readv size > 128k ...passed 00:12:33.276 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.276 Test: blockdev comparev and writev ...passed 00:12:33.276 Test: blockdev nvme passthru rw ...passed 00:12:33.276 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.276 Test: blockdev nvme admin passthru ...passed 00:12:33.276 Test: blockdev copy ...passed 00:12:33.276 Suite: bdevio tests on: Malloc2p7 00:12:33.276 Test: blockdev write read block ...passed 00:12:33.276 Test: blockdev write zeroes read block ...passed 00:12:33.276 Test: blockdev write zeroes read no split ...passed 00:12:33.276 Test: blockdev write zeroes read split ...passed 00:12:33.535 Test: blockdev write zeroes read split partial ...passed 00:12:33.535 Test: blockdev reset ...passed 00:12:33.535 Test: blockdev write read 8 blocks ...passed 00:12:33.535 Test: blockdev write read size > 128k ...passed 00:12:33.535 Test: blockdev write read invalid size ...passed 00:12:33.535 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.535 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.535 Test: blockdev write read max offset ...passed 00:12:33.535 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.535 Test: blockdev writev readv 8 blocks ...passed 00:12:33.535 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.535 Test: blockdev writev readv block ...passed 00:12:33.535 Test: blockdev writev readv size > 128k ...passed 00:12:33.535 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.535 Test: blockdev comparev and writev ...passed 00:12:33.535 Test: blockdev nvme passthru rw ...passed 00:12:33.535 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.535 Test: blockdev nvme admin passthru ...passed 00:12:33.535 Test: blockdev copy ...passed 00:12:33.535 Suite: bdevio tests on: Malloc2p6 00:12:33.535 Test: blockdev write read block ...passed 00:12:33.535 Test: blockdev write zeroes read block ...passed 00:12:33.535 Test: blockdev write zeroes read no split ...passed 00:12:33.535 Test: blockdev write zeroes read split ...passed 00:12:33.535 Test: blockdev write zeroes read split partial ...passed 00:12:33.535 Test: blockdev reset ...passed 00:12:33.535 Test: blockdev write read 8 blocks ...passed 00:12:33.535 Test: blockdev write read size > 128k ...passed 00:12:33.535 Test: blockdev write read invalid size ...passed 00:12:33.535 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.535 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.535 Test: blockdev write read max offset ...passed 00:12:33.535 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.535 Test: blockdev writev readv 8 blocks ...passed 00:12:33.535 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.535 Test: blockdev writev readv block ...passed 00:12:33.535 Test: blockdev writev readv size > 128k ...passed 00:12:33.535 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.535 Test: blockdev comparev and writev ...passed 00:12:33.535 Test: blockdev nvme passthru rw ...passed 00:12:33.535 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.535 Test: blockdev nvme admin passthru ...passed 00:12:33.535 Test: blockdev copy ...passed 
00:12:33.535 Suite: bdevio tests on: Malloc2p5 00:12:33.535 Test: blockdev write read block ...passed 00:12:33.535 Test: blockdev write zeroes read block ...passed 00:12:33.535 Test: blockdev write zeroes read no split ...passed 00:12:33.535 Test: blockdev write zeroes read split ...passed 00:12:33.535 Test: blockdev write zeroes read split partial ...passed 00:12:33.535 Test: blockdev reset ...passed 00:12:33.535 Test: blockdev write read 8 blocks ...passed 00:12:33.535 Test: blockdev write read size > 128k ...passed 00:12:33.535 Test: blockdev write read invalid size ...passed 00:12:33.535 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.536 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.536 Test: blockdev write read max offset ...passed 00:12:33.536 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.536 Test: blockdev writev readv 8 blocks ...passed 00:12:33.536 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.536 Test: blockdev writev readv block ...passed 00:12:33.536 Test: blockdev writev readv size > 128k ...passed 00:12:33.536 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.536 Test: blockdev comparev and writev ...passed 00:12:33.536 Test: blockdev nvme passthru rw ...passed 00:12:33.536 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.536 Test: blockdev nvme admin passthru ...passed 00:12:33.536 Test: blockdev copy ...passed 00:12:33.536 Suite: bdevio tests on: Malloc2p4 00:12:33.536 Test: blockdev write read block ...passed 00:12:33.536 Test: blockdev write zeroes read block ...passed 00:12:33.536 Test: blockdev write zeroes read no split ...passed 00:12:33.536 Test: blockdev write zeroes read split ...passed 00:12:33.536 Test: blockdev write zeroes read split partial ...passed 00:12:33.536 Test: blockdev reset ...passed 00:12:33.536 Test: blockdev write read 8 blocks ...passed 00:12:33.536 Test: blockdev write read size > 128k ...passed 00:12:33.536 Test: blockdev write read invalid size ...passed 00:12:33.536 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.536 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.536 Test: blockdev write read max offset ...passed 00:12:33.536 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.536 Test: blockdev writev readv 8 blocks ...passed 00:12:33.536 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.536 Test: blockdev writev readv block ...passed 00:12:33.536 Test: blockdev writev readv size > 128k ...passed 00:12:33.536 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.536 Test: blockdev comparev and writev ...passed 00:12:33.536 Test: blockdev nvme passthru rw ...passed 00:12:33.536 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.536 Test: blockdev nvme admin passthru ...passed 00:12:33.536 Test: blockdev copy ...passed 00:12:33.536 Suite: bdevio tests on: Malloc2p3 00:12:33.536 Test: blockdev write read block ...passed 00:12:33.536 Test: blockdev write zeroes read block ...passed 00:12:33.536 Test: blockdev write zeroes read no split ...passed 00:12:33.536 Test: blockdev write zeroes read split ...passed 00:12:33.536 Test: blockdev write zeroes read split partial ...passed 00:12:33.536 Test: blockdev reset ...passed 00:12:33.536 Test: blockdev write read 8 blocks ...passed 00:12:33.536 Test: blockdev write read size > 128k ...passed 00:12:33.536 Test: 
blockdev write read invalid size ...passed 00:12:33.536 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.536 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.536 Test: blockdev write read max offset ...passed 00:12:33.536 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.536 Test: blockdev writev readv 8 blocks ...passed 00:12:33.536 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.536 Test: blockdev writev readv block ...passed 00:12:33.536 Test: blockdev writev readv size > 128k ...passed 00:12:33.536 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.536 Test: blockdev comparev and writev ...passed 00:12:33.536 Test: blockdev nvme passthru rw ...passed 00:12:33.536 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.536 Test: blockdev nvme admin passthru ...passed 00:12:33.536 Test: blockdev copy ...passed 00:12:33.536 Suite: bdevio tests on: Malloc2p2 00:12:33.536 Test: blockdev write read block ...passed 00:12:33.536 Test: blockdev write zeroes read block ...passed 00:12:33.536 Test: blockdev write zeroes read no split ...passed 00:12:33.795 Test: blockdev write zeroes read split ...passed 00:12:33.795 Test: blockdev write zeroes read split partial ...passed 00:12:33.795 Test: blockdev reset ...passed 00:12:33.795 Test: blockdev write read 8 blocks ...passed 00:12:33.795 Test: blockdev write read size > 128k ...passed 00:12:33.795 Test: blockdev write read invalid size ...passed 00:12:33.795 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.795 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.795 Test: blockdev write read max offset ...passed 00:12:33.795 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.795 Test: blockdev writev readv 8 blocks ...passed 00:12:33.795 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.795 Test: blockdev writev readv block ...passed 00:12:33.795 Test: blockdev writev readv size > 128k ...passed 00:12:33.795 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.795 Test: blockdev comparev and writev ...passed 00:12:33.795 Test: blockdev nvme passthru rw ...passed 00:12:33.795 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.795 Test: blockdev nvme admin passthru ...passed 00:12:33.795 Test: blockdev copy ...passed 00:12:33.795 Suite: bdevio tests on: Malloc2p1 00:12:33.795 Test: blockdev write read block ...passed 00:12:33.795 Test: blockdev write zeroes read block ...passed 00:12:33.795 Test: blockdev write zeroes read no split ...passed 00:12:33.795 Test: blockdev write zeroes read split ...passed 00:12:33.795 Test: blockdev write zeroes read split partial ...passed 00:12:33.795 Test: blockdev reset ...passed 00:12:33.795 Test: blockdev write read 8 blocks ...passed 00:12:33.795 Test: blockdev write read size > 128k ...passed 00:12:33.795 Test: blockdev write read invalid size ...passed 00:12:33.795 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.795 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.795 Test: blockdev write read max offset ...passed 00:12:33.795 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.795 Test: blockdev writev readv 8 blocks ...passed 00:12:33.795 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.795 Test: blockdev writev readv block ...passed 
00:12:33.795 Test: blockdev writev readv size > 128k ...passed 00:12:33.795 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.795 Test: blockdev comparev and writev ...passed 00:12:33.795 Test: blockdev nvme passthru rw ...passed 00:12:33.795 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.795 Test: blockdev nvme admin passthru ...passed 00:12:33.795 Test: blockdev copy ...passed 00:12:33.795 Suite: bdevio tests on: Malloc2p0 00:12:33.795 Test: blockdev write read block ...passed 00:12:33.795 Test: blockdev write zeroes read block ...passed 00:12:33.795 Test: blockdev write zeroes read no split ...passed 00:12:33.795 Test: blockdev write zeroes read split ...passed 00:12:33.795 Test: blockdev write zeroes read split partial ...passed 00:12:33.795 Test: blockdev reset ...passed 00:12:33.795 Test: blockdev write read 8 blocks ...passed 00:12:33.795 Test: blockdev write read size > 128k ...passed 00:12:33.795 Test: blockdev write read invalid size ...passed 00:12:33.795 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.795 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.795 Test: blockdev write read max offset ...passed 00:12:33.795 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.795 Test: blockdev writev readv 8 blocks ...passed 00:12:33.795 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.795 Test: blockdev writev readv block ...passed 00:12:33.795 Test: blockdev writev readv size > 128k ...passed 00:12:33.795 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.795 Test: blockdev comparev and writev ...passed 00:12:33.795 Test: blockdev nvme passthru rw ...passed 00:12:33.795 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.795 Test: blockdev nvme admin passthru ...passed 00:12:33.795 Test: blockdev copy ...passed 00:12:33.795 Suite: bdevio tests on: Malloc1p1 00:12:33.795 Test: blockdev write read block ...passed 00:12:33.795 Test: blockdev write zeroes read block ...passed 00:12:33.795 Test: blockdev write zeroes read no split ...passed 00:12:33.795 Test: blockdev write zeroes read split ...passed 00:12:33.795 Test: blockdev write zeroes read split partial ...passed 00:12:33.795 Test: blockdev reset ...passed 00:12:33.795 Test: blockdev write read 8 blocks ...passed 00:12:33.795 Test: blockdev write read size > 128k ...passed 00:12:33.795 Test: blockdev write read invalid size ...passed 00:12:33.795 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.795 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.795 Test: blockdev write read max offset ...passed 00:12:33.795 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.795 Test: blockdev writev readv 8 blocks ...passed 00:12:33.795 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.795 Test: blockdev writev readv block ...passed 00:12:33.795 Test: blockdev writev readv size > 128k ...passed 00:12:33.795 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.795 Test: blockdev comparev and writev ...passed 00:12:33.795 Test: blockdev nvme passthru rw ...passed 00:12:33.795 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.795 Test: blockdev nvme admin passthru ...passed 00:12:33.795 Test: blockdev copy ...passed 00:12:33.795 Suite: bdevio tests on: Malloc1p0 00:12:33.795 Test: blockdev write read block ...passed 00:12:33.795 Test: blockdev 
write zeroes read block ...passed 00:12:33.795 Test: blockdev write zeroes read no split ...passed 00:12:33.795 Test: blockdev write zeroes read split ...passed 00:12:34.054 Test: blockdev write zeroes read split partial ...passed 00:12:34.054 Test: blockdev reset ...passed 00:12:34.054 Test: blockdev write read 8 blocks ...passed 00:12:34.054 Test: blockdev write read size > 128k ...passed 00:12:34.054 Test: blockdev write read invalid size ...passed 00:12:34.054 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:34.054 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:34.054 Test: blockdev write read max offset ...passed 00:12:34.054 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:34.054 Test: blockdev writev readv 8 blocks ...passed 00:12:34.054 Test: blockdev writev readv 30 x 1block ...passed 00:12:34.054 Test: blockdev writev readv block ...passed 00:12:34.054 Test: blockdev writev readv size > 128k ...passed 00:12:34.054 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:34.054 Test: blockdev comparev and writev ...passed 00:12:34.054 Test: blockdev nvme passthru rw ...passed 00:12:34.054 Test: blockdev nvme passthru vendor specific ...passed 00:12:34.054 Test: blockdev nvme admin passthru ...passed 00:12:34.054 Test: blockdev copy ...passed 00:12:34.054 Suite: bdevio tests on: Malloc0 00:12:34.054 Test: blockdev write read block ...passed 00:12:34.054 Test: blockdev write zeroes read block ...passed 00:12:34.054 Test: blockdev write zeroes read no split ...passed 00:12:34.054 Test: blockdev write zeroes read split ...passed 00:12:34.054 Test: blockdev write zeroes read split partial ...passed 00:12:34.054 Test: blockdev reset ...passed 00:12:34.054 Test: blockdev write read 8 blocks ...passed 00:12:34.054 Test: blockdev write read size > 128k ...passed 00:12:34.054 Test: blockdev write read invalid size ...passed 00:12:34.054 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:34.054 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:34.054 Test: blockdev write read max offset ...passed 00:12:34.054 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:34.054 Test: blockdev writev readv 8 blocks ...passed 00:12:34.054 Test: blockdev writev readv 30 x 1block ...passed 00:12:34.054 Test: blockdev writev readv block ...passed 00:12:34.054 Test: blockdev writev readv size > 128k ...passed 00:12:34.054 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:34.054 Test: blockdev comparev and writev ...passed 00:12:34.054 Test: blockdev nvme passthru rw ...passed 00:12:34.054 Test: blockdev nvme passthru vendor specific ...passed 00:12:34.054 Test: blockdev nvme admin passthru ...passed 00:12:34.054 Test: blockdev copy ...passed 00:12:34.054 00:12:34.054 Run Summary: Type Total Ran Passed Failed Inactive 00:12:34.054 suites 16 16 n/a 0 0 00:12:34.054 tests 368 368 368 0 0 00:12:34.054 asserts 2224 2224 2224 0 n/a 00:12:34.054 00:12:34.054 Elapsed time = 2.840 seconds 00:12:34.054 0 00:12:34.054 05:09:53 -- bdev/blockdev.sh@293 -- # killprocess 65560 00:12:34.054 05:09:53 -- common/autotest_common.sh@926 -- # '[' -z 65560 ']' 00:12:34.054 05:09:53 -- common/autotest_common.sh@930 -- # kill -0 65560 00:12:34.054 05:09:53 -- common/autotest_common.sh@931 -- # uname 00:12:34.054 05:09:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:34.054 05:09:53 -- common/autotest_common.sh@932 
-- # ps --no-headers -o comm= 65560 00:12:34.054 killing process with pid 65560 00:12:34.054 05:09:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:34.054 05:09:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:34.054 05:09:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65560' 00:12:34.054 05:09:53 -- common/autotest_common.sh@945 -- # kill 65560 00:12:34.054 05:09:53 -- common/autotest_common.sh@950 -- # wait 65560 00:12:35.958 05:09:54 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:12:35.958 00:12:35.958 real 0m4.617s 00:12:35.958 user 0m12.145s 00:12:35.958 sys 0m0.612s 00:12:35.958 05:09:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:35.958 05:09:54 -- common/autotest_common.sh@10 -- # set +x 00:12:35.958 ************************************ 00:12:35.958 END TEST bdev_bounds 00:12:35.958 ************************************ 00:12:35.958 05:09:54 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:12:35.958 05:09:54 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:12:35.958 05:09:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:35.958 05:09:54 -- common/autotest_common.sh@10 -- # set +x 00:12:35.958 ************************************ 00:12:35.958 START TEST bdev_nbd 00:12:35.958 ************************************ 00:12:35.958 05:09:54 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:12:35.958 05:09:54 -- bdev/blockdev.sh@298 -- # uname -s 00:12:35.958 05:09:54 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:12:35.958 05:09:54 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:35.958 05:09:54 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:35.958 05:09:54 -- bdev/blockdev.sh@302 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:35.958 05:09:54 -- bdev/blockdev.sh@302 -- # local bdev_all 00:12:35.958 05:09:54 -- bdev/blockdev.sh@303 -- # local bdev_num=16 00:12:35.958 05:09:54 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:12:35.958 05:09:54 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:35.958 05:09:54 -- bdev/blockdev.sh@309 -- # local nbd_all 00:12:35.958 05:09:54 -- bdev/blockdev.sh@310 -- # bdev_num=16 00:12:35.958 05:09:54 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:35.958 05:09:54 -- bdev/blockdev.sh@312 -- # local nbd_list 00:12:35.958 05:09:54 -- bdev/blockdev.sh@313 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 
'raid1' 'AIO0') 00:12:35.958 05:09:54 -- bdev/blockdev.sh@313 -- # local bdev_list 00:12:35.958 05:09:54 -- bdev/blockdev.sh@316 -- # nbd_pid=65639 00:12:35.958 05:09:54 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:35.958 05:09:54 -- bdev/blockdev.sh@318 -- # waitforlisten 65639 /var/tmp/spdk-nbd.sock 00:12:35.958 05:09:54 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:35.958 05:09:54 -- common/autotest_common.sh@819 -- # '[' -z 65639 ']' 00:12:35.958 05:09:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:35.958 05:09:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:35.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:35.958 05:09:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:35.958 05:09:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:35.958 05:09:54 -- common/autotest_common.sh@10 -- # set +x 00:12:35.958 [2024-07-26 05:09:54.873530] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:12:35.958 [2024-07-26 05:09:54.873707] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.958 [2024-07-26 05:09:55.037078] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.218 [2024-07-26 05:09:55.217581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.477 [2024-07-26 05:09:55.544399] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:36.477 [2024-07-26 05:09:55.544483] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:36.477 [2024-07-26 05:09:55.552368] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:36.477 [2024-07-26 05:09:55.552442] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:36.477 [2024-07-26 05:09:55.560387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:36.477 [2024-07-26 05:09:55.560452] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:36.477 [2024-07-26 05:09:55.560468] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:36.736 [2024-07-26 05:09:55.735134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:36.736 [2024-07-26 05:09:55.735213] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.736 [2024-07-26 05:09:55.735252] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:12:36.736 [2024-07-26 05:09:55.735264] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.736 [2024-07-26 05:09:55.737866] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.736 [2024-07-26 05:09:55.737906] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:37.673 05:09:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:37.673 05:09:56 -- 
common/autotest_common.sh@852 -- # return 0 00:12:37.673 05:09:56 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:37.673 05:09:56 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:37.673 05:09:56 -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:37.673 05:09:56 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:37.673 05:09:56 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:37.673 05:09:56 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:37.673 05:09:56 -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:37.673 05:09:56 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:37.673 05:09:56 -- bdev/nbd_common.sh@24 -- # local i 00:12:37.673 05:09:56 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:37.673 05:09:56 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:37.673 05:09:56 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:37.673 05:09:56 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:12:37.932 05:09:56 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:37.932 05:09:56 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:37.932 05:09:56 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:37.932 05:09:56 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:12:37.932 05:09:56 -- common/autotest_common.sh@857 -- # local i 00:12:37.932 05:09:56 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:37.932 05:09:56 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:37.932 05:09:56 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:12:37.932 05:09:56 -- common/autotest_common.sh@861 -- # break 00:12:37.932 05:09:56 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:37.932 05:09:56 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:37.932 05:09:56 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:37.932 1+0 records in 00:12:37.932 1+0 records out 00:12:37.932 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207812 s, 19.7 MB/s 00:12:37.932 05:09:56 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.932 05:09:56 -- common/autotest_common.sh@874 -- # size=4096 00:12:37.932 05:09:56 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.932 05:09:56 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:37.932 05:09:56 -- common/autotest_common.sh@877 -- # return 0 00:12:37.932 05:09:56 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:37.932 05:09:56 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:37.932 05:09:56 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Malloc1p0 00:12:38.191 05:09:57 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:38.191 05:09:57 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:38.191 05:09:57 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:38.191 05:09:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:12:38.191 05:09:57 -- common/autotest_common.sh@857 -- # local i 00:12:38.191 05:09:57 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:38.191 05:09:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:38.191 05:09:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:12:38.191 05:09:57 -- common/autotest_common.sh@861 -- # break 00:12:38.191 05:09:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:38.191 05:09:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:38.191 05:09:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:38.191 1+0 records in 00:12:38.191 1+0 records out 00:12:38.191 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237956 s, 17.2 MB/s 00:12:38.191 05:09:57 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.191 05:09:57 -- common/autotest_common.sh@874 -- # size=4096 00:12:38.191 05:09:57 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.191 05:09:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:38.191 05:09:57 -- common/autotest_common.sh@877 -- # return 0 00:12:38.191 05:09:57 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:38.191 05:09:57 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:38.191 05:09:57 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:12:38.450 05:09:57 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:38.450 05:09:57 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:38.450 05:09:57 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:38.450 05:09:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:12:38.450 05:09:57 -- common/autotest_common.sh@857 -- # local i 00:12:38.450 05:09:57 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:38.450 05:09:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:38.450 05:09:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:12:38.450 05:09:57 -- common/autotest_common.sh@861 -- # break 00:12:38.450 05:09:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:38.450 05:09:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:38.450 05:09:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:38.450 1+0 records in 00:12:38.450 1+0 records out 00:12:38.450 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276462 s, 14.8 MB/s 00:12:38.450 05:09:57 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.450 05:09:57 -- common/autotest_common.sh@874 -- # size=4096 00:12:38.450 05:09:57 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.450 05:09:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:38.450 05:09:57 -- common/autotest_common.sh@877 -- # return 0 00:12:38.450 05:09:57 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:38.450 05:09:57 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:38.450 05:09:57 -- bdev/nbd_common.sh@28 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:12:38.450 05:09:57 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:38.450 05:09:57 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:38.450 05:09:57 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:38.450 05:09:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:12:38.450 05:09:57 -- common/autotest_common.sh@857 -- # local i 00:12:38.450 05:09:57 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:38.450 05:09:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:38.450 05:09:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:12:38.450 05:09:57 -- common/autotest_common.sh@861 -- # break 00:12:38.450 05:09:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:38.450 05:09:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:38.450 05:09:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:38.709 1+0 records in 00:12:38.709 1+0 records out 00:12:38.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343115 s, 11.9 MB/s 00:12:38.709 05:09:57 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.709 05:09:57 -- common/autotest_common.sh@874 -- # size=4096 00:12:38.709 05:09:57 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.709 05:09:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:38.709 05:09:57 -- common/autotest_common.sh@877 -- # return 0 00:12:38.709 05:09:57 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:38.709 05:09:57 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:38.709 05:09:57 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:12:38.709 05:09:57 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:38.709 05:09:57 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:38.709 05:09:57 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:38.709 05:09:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:12:38.709 05:09:57 -- common/autotest_common.sh@857 -- # local i 00:12:38.709 05:09:57 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:38.709 05:09:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:38.709 05:09:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:12:38.709 05:09:57 -- common/autotest_common.sh@861 -- # break 00:12:38.709 05:09:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:38.709 05:09:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:38.709 05:09:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:38.709 1+0 records in 00:12:38.709 1+0 records out 00:12:38.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111665 s, 3.7 MB/s 00:12:38.709 05:09:57 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.709 05:09:57 -- common/autotest_common.sh@874 -- # size=4096 00:12:38.709 05:09:57 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.709 05:09:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:38.709 05:09:57 -- common/autotest_common.sh@877 -- # return 0 00:12:38.709 05:09:57 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:38.709 05:09:57 -- 
bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:38.709 05:09:57 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:12:38.967 05:09:58 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:38.967 05:09:58 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:39.227 05:09:58 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:39.227 05:09:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:12:39.227 05:09:58 -- common/autotest_common.sh@857 -- # local i 00:12:39.227 05:09:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:39.227 05:09:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:39.227 05:09:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:12:39.227 05:09:58 -- common/autotest_common.sh@861 -- # break 00:12:39.227 05:09:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:39.227 05:09:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:39.227 05:09:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:39.227 1+0 records in 00:12:39.227 1+0 records out 00:12:39.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413706 s, 9.9 MB/s 00:12:39.227 05:09:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.227 05:09:58 -- common/autotest_common.sh@874 -- # size=4096 00:12:39.227 05:09:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.227 05:09:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:39.227 05:09:58 -- common/autotest_common.sh@877 -- # return 0 00:12:39.227 05:09:58 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:39.227 05:09:58 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:39.227 05:09:58 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:12:39.486 05:09:58 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:12:39.486 05:09:58 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:12:39.486 05:09:58 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:12:39.486 05:09:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:12:39.486 05:09:58 -- common/autotest_common.sh@857 -- # local i 00:12:39.486 05:09:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:39.486 05:09:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:39.486 05:09:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:12:39.486 05:09:58 -- common/autotest_common.sh@861 -- # break 00:12:39.486 05:09:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:39.486 05:09:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:39.486 05:09:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:39.486 1+0 records in 00:12:39.486 1+0 records out 00:12:39.486 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469264 s, 8.7 MB/s 00:12:39.486 05:09:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.486 05:09:58 -- common/autotest_common.sh@874 -- # size=4096 00:12:39.486 05:09:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.486 05:09:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:39.486 05:09:58 -- common/autotest_common.sh@877 -- # return 0 
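Every bdev in the list is exported through the same sequence: nbd_start_disk is sent over the /var/tmp/spdk-nbd.sock RPC socket, waitfornbd polls /proc/partitions until the kernel registers the new device, and one direct-I/O read confirms it responds. A condensed sketch of that per-bdev loop, assuming the rpc.py path and socket shown in this log; the retry loop and bdev list are simplified relative to the real nbd_common.sh helpers.

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
    for bdev in Malloc0 Malloc1p0 Malloc1p1; do                      # abbreviated bdev list
        nbd=$($rpc nbd_start_disk "$bdev")                           # RPC prints the allocated /dev/nbdX
        for i in $(seq 1 20); do                                     # bounded wait, as in waitfornbd
            grep -q -w "$(basename "$nbd")" /proc/partitions && break
            sleep 0.1
        done
        dd if="$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct    # single direct read as a sanity check
    done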
00:12:39.486 05:09:58 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:39.486 05:09:58 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:39.486 05:09:58 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:12:39.745 05:09:58 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:12:39.745 05:09:58 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:12:39.745 05:09:58 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:12:39.745 05:09:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 00:12:39.745 05:09:58 -- common/autotest_common.sh@857 -- # local i 00:12:39.745 05:09:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:39.745 05:09:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:39.745 05:09:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:12:39.745 05:09:58 -- common/autotest_common.sh@861 -- # break 00:12:39.745 05:09:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:39.745 05:09:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:39.745 05:09:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:39.745 1+0 records in 00:12:39.745 1+0 records out 00:12:39.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430704 s, 9.5 MB/s 00:12:39.745 05:09:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.745 05:09:58 -- common/autotest_common.sh@874 -- # size=4096 00:12:39.745 05:09:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.745 05:09:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:39.745 05:09:58 -- common/autotest_common.sh@877 -- # return 0 00:12:39.745 05:09:58 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:39.745 05:09:58 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:39.745 05:09:58 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:12:40.004 05:09:58 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:12:40.004 05:09:58 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:12:40.004 05:09:58 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:12:40.004 05:09:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 00:12:40.004 05:09:58 -- common/autotest_common.sh@857 -- # local i 00:12:40.004 05:09:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:40.004 05:09:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:40.004 05:09:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:12:40.004 05:09:58 -- common/autotest_common.sh@861 -- # break 00:12:40.004 05:09:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:40.004 05:09:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:40.004 05:09:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:40.004 1+0 records in 00:12:40.004 1+0 records out 00:12:40.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000524761 s, 7.8 MB/s 00:12:40.004 05:09:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.004 05:09:58 -- common/autotest_common.sh@874 -- # size=4096 00:12:40.004 05:09:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.004 05:09:58 -- common/autotest_common.sh@876 -- # 
'[' 4096 '!=' 0 ']' 00:12:40.004 05:09:58 -- common/autotest_common.sh@877 -- # return 0 00:12:40.004 05:09:58 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:40.004 05:09:58 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:40.004 05:09:58 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:12:40.263 05:09:59 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:12:40.263 05:09:59 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:12:40.263 05:09:59 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:12:40.263 05:09:59 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 00:12:40.263 05:09:59 -- common/autotest_common.sh@857 -- # local i 00:12:40.263 05:09:59 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:40.263 05:09:59 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:40.263 05:09:59 -- common/autotest_common.sh@860 -- # grep -q -w nbd9 /proc/partitions 00:12:40.263 05:09:59 -- common/autotest_common.sh@861 -- # break 00:12:40.263 05:09:59 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:40.263 05:09:59 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:40.263 05:09:59 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:40.263 1+0 records in 00:12:40.263 1+0 records out 00:12:40.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000518071 s, 7.9 MB/s 00:12:40.263 05:09:59 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.263 05:09:59 -- common/autotest_common.sh@874 -- # size=4096 00:12:40.263 05:09:59 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.263 05:09:59 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:40.263 05:09:59 -- common/autotest_common.sh@877 -- # return 0 00:12:40.263 05:09:59 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:40.263 05:09:59 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:40.263 05:09:59 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:12:40.521 05:09:59 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:12:40.521 05:09:59 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:12:40.521 05:09:59 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:12:40.521 05:09:59 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:12:40.521 05:09:59 -- common/autotest_common.sh@857 -- # local i 00:12:40.521 05:09:59 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:40.521 05:09:59 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:40.521 05:09:59 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:12:40.521 05:09:59 -- common/autotest_common.sh@861 -- # break 00:12:40.521 05:09:59 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:40.521 05:09:59 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:40.521 05:09:59 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:40.521 1+0 records in 00:12:40.521 1+0 records out 00:12:40.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519661 s, 7.9 MB/s 00:12:40.521 05:09:59 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.521 05:09:59 -- common/autotest_common.sh@874 -- # size=4096 00:12:40.521 05:09:59 -- common/autotest_common.sh@875 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.521 05:09:59 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:40.521 05:09:59 -- common/autotest_common.sh@877 -- # return 0 00:12:40.521 05:09:59 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:40.521 05:09:59 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:40.521 05:09:59 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:12:40.781 05:09:59 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:12:40.781 05:09:59 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:12:40.781 05:09:59 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:12:40.781 05:09:59 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:12:40.781 05:09:59 -- common/autotest_common.sh@857 -- # local i 00:12:40.781 05:09:59 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:40.781 05:09:59 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:40.781 05:09:59 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:12:40.781 05:09:59 -- common/autotest_common.sh@861 -- # break 00:12:40.781 05:09:59 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:40.781 05:09:59 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:40.781 05:09:59 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:40.781 1+0 records in 00:12:40.781 1+0 records out 00:12:40.781 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000740456 s, 5.5 MB/s 00:12:40.781 05:09:59 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.781 05:09:59 -- common/autotest_common.sh@874 -- # size=4096 00:12:40.781 05:09:59 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.781 05:09:59 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:40.781 05:09:59 -- common/autotest_common.sh@877 -- # return 0 00:12:40.781 05:09:59 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:40.781 05:09:59 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:40.781 05:09:59 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:12:41.040 05:09:59 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:12:41.040 05:09:59 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:12:41.040 05:09:59 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:12:41.040 05:09:59 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:12:41.040 05:09:59 -- common/autotest_common.sh@857 -- # local i 00:12:41.040 05:09:59 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:41.040 05:09:59 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:41.040 05:09:59 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:12:41.040 05:09:59 -- common/autotest_common.sh@861 -- # break 00:12:41.040 05:09:59 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:41.040 05:09:59 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:41.040 05:09:59 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.040 1+0 records in 00:12:41.040 1+0 records out 00:12:41.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000678236 s, 6.0 MB/s 00:12:41.040 05:09:59 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.041 05:09:59 -- 
common/autotest_common.sh@874 -- # size=4096 00:12:41.041 05:09:59 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.041 05:09:59 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:41.041 05:09:59 -- common/autotest_common.sh@877 -- # return 0 00:12:41.041 05:09:59 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:41.041 05:09:59 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:41.041 05:09:59 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:12:41.300 05:10:00 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:12:41.300 05:10:00 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:12:41.300 05:10:00 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:12:41.300 05:10:00 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:12:41.300 05:10:00 -- common/autotest_common.sh@857 -- # local i 00:12:41.300 05:10:00 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:41.300 05:10:00 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:41.300 05:10:00 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:12:41.300 05:10:00 -- common/autotest_common.sh@861 -- # break 00:12:41.300 05:10:00 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:41.300 05:10:00 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:41.300 05:10:00 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.300 1+0 records in 00:12:41.300 1+0 records out 00:12:41.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000599853 s, 6.8 MB/s 00:12:41.300 05:10:00 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.300 05:10:00 -- common/autotest_common.sh@874 -- # size=4096 00:12:41.300 05:10:00 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.300 05:10:00 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:41.300 05:10:00 -- common/autotest_common.sh@877 -- # return 0 00:12:41.300 05:10:00 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:41.300 05:10:00 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:41.300 05:10:00 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:12:41.559 05:10:00 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:12:41.559 05:10:00 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:12:41.559 05:10:00 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:12:41.559 05:10:00 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:12:41.559 05:10:00 -- common/autotest_common.sh@857 -- # local i 00:12:41.559 05:10:00 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:41.559 05:10:00 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:41.559 05:10:00 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:12:41.559 05:10:00 -- common/autotest_common.sh@861 -- # break 00:12:41.559 05:10:00 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:41.559 05:10:00 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:41.559 05:10:00 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.559 1+0 records in 00:12:41.559 1+0 records out 00:12:41.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000843136 s, 4.9 MB/s 00:12:41.559 05:10:00 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.559 05:10:00 -- common/autotest_common.sh@874 -- # size=4096 00:12:41.559 05:10:00 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.559 05:10:00 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:41.559 05:10:00 -- common/autotest_common.sh@877 -- # return 0 00:12:41.559 05:10:00 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:41.559 05:10:00 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:41.559 05:10:00 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:12:41.818 05:10:00 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:12:41.818 05:10:00 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:12:41.818 05:10:00 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:12:41.818 05:10:00 -- common/autotest_common.sh@856 -- # local nbd_name=nbd15 00:12:41.818 05:10:00 -- common/autotest_common.sh@857 -- # local i 00:12:41.818 05:10:00 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:41.818 05:10:00 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:41.818 05:10:00 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:12:41.818 05:10:00 -- common/autotest_common.sh@861 -- # break 00:12:41.818 05:10:00 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:41.818 05:10:00 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:41.818 05:10:00 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.818 1+0 records in 00:12:41.818 1+0 records out 00:12:41.818 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117392 s, 3.5 MB/s 00:12:41.818 05:10:00 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.818 05:10:00 -- common/autotest_common.sh@874 -- # size=4096 00:12:41.818 05:10:00 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.818 05:10:00 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:41.818 05:10:00 -- common/autotest_common.sh@877 -- # return 0 00:12:41.818 05:10:00 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:41.818 05:10:00 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:41.818 05:10:00 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:42.077 05:10:00 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:42.077 { 00:12:42.077 "nbd_device": "/dev/nbd0", 00:12:42.077 "bdev_name": "Malloc0" 00:12:42.077 }, 00:12:42.077 { 00:12:42.077 "nbd_device": "/dev/nbd1", 00:12:42.077 "bdev_name": "Malloc1p0" 00:12:42.077 }, 00:12:42.077 { 00:12:42.077 "nbd_device": "/dev/nbd2", 00:12:42.077 "bdev_name": "Malloc1p1" 00:12:42.077 }, 00:12:42.077 { 00:12:42.077 "nbd_device": "/dev/nbd3", 00:12:42.077 "bdev_name": "Malloc2p0" 00:12:42.077 }, 00:12:42.077 { 00:12:42.077 "nbd_device": "/dev/nbd4", 00:12:42.077 "bdev_name": "Malloc2p1" 00:12:42.077 }, 00:12:42.077 { 00:12:42.077 "nbd_device": "/dev/nbd5", 00:12:42.077 "bdev_name": "Malloc2p2" 00:12:42.077 }, 00:12:42.077 { 00:12:42.077 "nbd_device": "/dev/nbd6", 00:12:42.077 "bdev_name": "Malloc2p3" 00:12:42.077 }, 00:12:42.077 { 00:12:42.077 "nbd_device": "/dev/nbd7", 00:12:42.077 "bdev_name": "Malloc2p4" 00:12:42.077 }, 00:12:42.077 { 00:12:42.077 "nbd_device": "/dev/nbd8", 00:12:42.077 "bdev_name": "Malloc2p5" 
00:12:42.077 }, 00:12:42.077 { 00:12:42.077 "nbd_device": "/dev/nbd9", 00:12:42.077 "bdev_name": "Malloc2p6" 00:12:42.077 }, 00:12:42.077 { 00:12:42.077 "nbd_device": "/dev/nbd10", 00:12:42.077 "bdev_name": "Malloc2p7" 00:12:42.077 }, 00:12:42.077 { 00:12:42.077 "nbd_device": "/dev/nbd11", 00:12:42.077 "bdev_name": "TestPT" 00:12:42.077 }, 00:12:42.077 { 00:12:42.077 "nbd_device": "/dev/nbd12", 00:12:42.077 "bdev_name": "raid0" 00:12:42.077 }, 00:12:42.077 { 00:12:42.077 "nbd_device": "/dev/nbd13", 00:12:42.077 "bdev_name": "concat0" 00:12:42.077 }, 00:12:42.077 { 00:12:42.077 "nbd_device": "/dev/nbd14", 00:12:42.077 "bdev_name": "raid1" 00:12:42.077 }, 00:12:42.077 { 00:12:42.077 "nbd_device": "/dev/nbd15", 00:12:42.077 "bdev_name": "AIO0" 00:12:42.077 } 00:12:42.077 ]' 00:12:42.077 05:10:00 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:42.077 05:10:00 -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:42.077 { 00:12:42.077 "nbd_device": "/dev/nbd0", 00:12:42.077 "bdev_name": "Malloc0" 00:12:42.077 }, 00:12:42.077 { 00:12:42.077 "nbd_device": "/dev/nbd1", 00:12:42.077 "bdev_name": "Malloc1p0" 00:12:42.077 }, 00:12:42.077 { 00:12:42.077 "nbd_device": "/dev/nbd2", 00:12:42.077 "bdev_name": "Malloc1p1" 00:12:42.077 }, 00:12:42.077 { 00:12:42.077 "nbd_device": "/dev/nbd3", 00:12:42.077 "bdev_name": "Malloc2p0" 00:12:42.077 }, 00:12:42.077 { 00:12:42.077 "nbd_device": "/dev/nbd4", 00:12:42.077 "bdev_name": "Malloc2p1" 00:12:42.077 }, 00:12:42.077 { 00:12:42.077 "nbd_device": "/dev/nbd5", 00:12:42.077 "bdev_name": "Malloc2p2" 00:12:42.077 }, 00:12:42.077 { 00:12:42.077 "nbd_device": "/dev/nbd6", 00:12:42.077 "bdev_name": "Malloc2p3" 00:12:42.077 }, 00:12:42.077 { 00:12:42.077 "nbd_device": "/dev/nbd7", 00:12:42.077 "bdev_name": "Malloc2p4" 00:12:42.077 }, 00:12:42.077 { 00:12:42.077 "nbd_device": "/dev/nbd8", 00:12:42.077 "bdev_name": "Malloc2p5" 00:12:42.077 }, 00:12:42.077 { 00:12:42.077 "nbd_device": "/dev/nbd9", 00:12:42.077 "bdev_name": "Malloc2p6" 00:12:42.077 }, 00:12:42.077 { 00:12:42.077 "nbd_device": "/dev/nbd10", 00:12:42.077 "bdev_name": "Malloc2p7" 00:12:42.077 }, 00:12:42.077 { 00:12:42.077 "nbd_device": "/dev/nbd11", 00:12:42.077 "bdev_name": "TestPT" 00:12:42.077 }, 00:12:42.077 { 00:12:42.077 "nbd_device": "/dev/nbd12", 00:12:42.077 "bdev_name": "raid0" 00:12:42.077 }, 00:12:42.077 { 00:12:42.077 "nbd_device": "/dev/nbd13", 00:12:42.077 "bdev_name": "concat0" 00:12:42.077 }, 00:12:42.077 { 00:12:42.077 "nbd_device": "/dev/nbd14", 00:12:42.077 "bdev_name": "raid1" 00:12:42.077 }, 00:12:42.077 { 00:12:42.077 "nbd_device": "/dev/nbd15", 00:12:42.077 "bdev_name": "AIO0" 00:12:42.077 } 00:12:42.077 ]' 00:12:42.077 05:10:00 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:42.077 05:10:00 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:12:42.077 05:10:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:42.077 05:10:00 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:12:42.077 05:10:00 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:42.077 05:10:00 -- bdev/nbd_common.sh@51 -- # local i 00:12:42.077 05:10:00 
-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:42.077 05:10:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:42.336 05:10:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:42.336 05:10:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:42.336 05:10:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:42.336 05:10:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:42.336 05:10:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.336 05:10:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:42.336 05:10:01 -- bdev/nbd_common.sh@41 -- # break 00:12:42.336 05:10:01 -- bdev/nbd_common.sh@45 -- # return 0 00:12:42.336 05:10:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:42.336 05:10:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:42.594 05:10:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:42.594 05:10:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:42.594 05:10:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:42.594 05:10:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:42.594 05:10:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.594 05:10:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:42.594 05:10:01 -- bdev/nbd_common.sh@41 -- # break 00:12:42.594 05:10:01 -- bdev/nbd_common.sh@45 -- # return 0 00:12:42.594 05:10:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:42.594 05:10:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:42.853 05:10:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:42.853 05:10:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:42.853 05:10:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:42.853 05:10:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:42.853 05:10:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.853 05:10:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:42.853 05:10:01 -- bdev/nbd_common.sh@41 -- # break 00:12:42.853 05:10:01 -- bdev/nbd_common.sh@45 -- # return 0 00:12:42.853 05:10:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:42.853 05:10:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:43.112 05:10:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:43.112 05:10:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:43.112 05:10:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:43.112 05:10:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.112 05:10:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.112 05:10:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:43.112 05:10:01 -- bdev/nbd_common.sh@41 -- # break 00:12:43.112 05:10:01 -- bdev/nbd_common.sh@45 -- # return 0 00:12:43.112 05:10:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.112 05:10:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:43.112 05:10:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:43.112 05:10:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:43.112 05:10:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:43.112 05:10:02 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.112 05:10:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.112 05:10:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:43.112 05:10:02 -- bdev/nbd_common.sh@41 -- # break 00:12:43.112 05:10:02 -- bdev/nbd_common.sh@45 -- # return 0 00:12:43.112 05:10:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.112 05:10:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:43.370 05:10:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:43.370 05:10:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:43.370 05:10:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:43.370 05:10:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.370 05:10:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.370 05:10:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:43.370 05:10:02 -- bdev/nbd_common.sh@41 -- # break 00:12:43.370 05:10:02 -- bdev/nbd_common.sh@45 -- # return 0 00:12:43.370 05:10:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.370 05:10:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:43.629 05:10:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:43.629 05:10:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:43.629 05:10:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:43.629 05:10:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.629 05:10:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.629 05:10:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:43.629 05:10:02 -- bdev/nbd_common.sh@41 -- # break 00:12:43.629 05:10:02 -- bdev/nbd_common.sh@45 -- # return 0 00:12:43.629 05:10:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.629 05:10:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:43.888 05:10:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:43.888 05:10:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:43.888 05:10:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:43.888 05:10:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.888 05:10:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.888 05:10:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:43.888 05:10:02 -- bdev/nbd_common.sh@41 -- # break 00:12:43.888 05:10:02 -- bdev/nbd_common.sh@45 -- # return 0 00:12:43.888 05:10:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.888 05:10:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:44.147 05:10:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:44.147 05:10:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:44.147 05:10:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:44.147 05:10:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.147 05:10:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.147 05:10:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:44.147 05:10:03 -- bdev/nbd_common.sh@41 -- # break 00:12:44.147 05:10:03 -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.147 05:10:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.147 05:10:03 -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:44.405 05:10:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:44.405 05:10:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:44.405 05:10:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:44.405 05:10:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.405 05:10:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.406 05:10:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:44.406 05:10:03 -- bdev/nbd_common.sh@41 -- # break 00:12:44.406 05:10:03 -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.406 05:10:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.406 05:10:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:44.663 05:10:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:44.663 05:10:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:44.663 05:10:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:44.663 05:10:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.663 05:10:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.663 05:10:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:44.663 05:10:03 -- bdev/nbd_common.sh@41 -- # break 00:12:44.663 05:10:03 -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.663 05:10:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.663 05:10:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:44.921 05:10:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:44.921 05:10:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:44.921 05:10:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:44.921 05:10:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.921 05:10:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.921 05:10:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:44.921 05:10:03 -- bdev/nbd_common.sh@41 -- # break 00:12:44.921 05:10:03 -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.921 05:10:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.921 05:10:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:45.180 05:10:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:45.180 05:10:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:45.180 05:10:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:45.180 05:10:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.180 05:10:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.180 05:10:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:45.180 05:10:04 -- bdev/nbd_common.sh@41 -- # break 00:12:45.180 05:10:04 -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.180 05:10:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.180 05:10:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:45.438 05:10:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:45.438 05:10:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:45.438 05:10:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:45.438 05:10:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.438 05:10:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
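The teardown entries above repeat the same waitfornbd_exit pattern for every device: after nbd_stop_disk is sent over the RPC socket, the helper polls /proc/partitions until the nbd name disappears, giving up after 20 attempts. A minimal sketch of that polling loop (not the real helper in bdev/nbd_common.sh; the sleep interval is an assumption):

  # Poll until an nbd device is gone from /proc/partitions (sketch only).
  waitfornbd_exit_sketch() {
      local nbd_name=$1
      local i
      for (( i = 1; i <= 20; i++ )); do
          # -w matches the whole word, so nbd1 does not also match nbd10
          if ! grep -q -w "$nbd_name" /proc/partitions; then
              return 0
          fi
          sleep 1
      done
      return 1
  }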
00:12:45.438 05:10:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:45.438 05:10:04 -- bdev/nbd_common.sh@41 -- # break 00:12:45.438 05:10:04 -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.438 05:10:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.438 05:10:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:45.697 05:10:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:45.697 05:10:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:45.697 05:10:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:45.697 05:10:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.697 05:10:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.697 05:10:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:45.697 05:10:04 -- bdev/nbd_common.sh@41 -- # break 00:12:45.697 05:10:04 -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.697 05:10:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.697 05:10:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:45.955 05:10:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:45.955 05:10:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:45.955 05:10:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:45.955 05:10:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.955 05:10:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.955 05:10:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:45.955 05:10:04 -- bdev/nbd_common.sh@41 -- # break 00:12:45.955 05:10:04 -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.955 05:10:04 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:45.955 05:10:04 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:45.955 05:10:04 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:46.214 05:10:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:46.214 05:10:05 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:46.214 05:10:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:46.214 05:10:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:46.214 05:10:05 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:46.214 05:10:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:46.214 05:10:05 -- bdev/nbd_common.sh@65 -- # true 00:12:46.214 05:10:05 -- bdev/nbd_common.sh@65 -- # count=0 00:12:46.214 05:10:05 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:46.214 05:10:05 -- bdev/nbd_common.sh@122 -- # count=0 00:12:46.214 05:10:05 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:46.214 05:10:05 -- bdev/nbd_common.sh@127 -- # return 0 00:12:46.214 05:10:05 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:46.214 05:10:05 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:46.214 05:10:05 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 
'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:46.214 05:10:05 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:46.214 05:10:05 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:46.214 05:10:05 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:46.215 05:10:05 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:46.215 05:10:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:46.215 05:10:05 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:46.215 05:10:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:46.215 05:10:05 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:46.215 05:10:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:46.215 05:10:05 -- bdev/nbd_common.sh@12 -- # local i 00:12:46.215 05:10:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:46.215 05:10:05 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:46.215 05:10:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:46.215 /dev/nbd0 00:12:46.473 05:10:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:46.473 05:10:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:46.473 05:10:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:12:46.473 05:10:05 -- common/autotest_common.sh@857 -- # local i 00:12:46.473 05:10:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:46.473 05:10:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:46.473 05:10:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:12:46.473 05:10:05 -- common/autotest_common.sh@861 -- # break 00:12:46.473 05:10:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:46.473 05:10:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:46.473 05:10:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:46.473 1+0 records in 00:12:46.473 1+0 records out 00:12:46.474 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324131 s, 12.6 MB/s 00:12:46.474 05:10:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.474 05:10:05 -- common/autotest_common.sh@874 -- # size=4096 00:12:46.474 05:10:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.474 05:10:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:46.474 05:10:05 -- common/autotest_common.sh@877 -- # return 0 00:12:46.474 05:10:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:46.474 05:10:05 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:46.474 05:10:05 
-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:12:46.733 /dev/nbd1 00:12:46.733 05:10:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:46.733 05:10:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:46.733 05:10:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:12:46.733 05:10:05 -- common/autotest_common.sh@857 -- # local i 00:12:46.733 05:10:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:46.733 05:10:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:46.733 05:10:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:12:46.733 05:10:05 -- common/autotest_common.sh@861 -- # break 00:12:46.733 05:10:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:46.733 05:10:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:46.733 05:10:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:46.733 1+0 records in 00:12:46.733 1+0 records out 00:12:46.733 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262545 s, 15.6 MB/s 00:12:46.733 05:10:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.733 05:10:05 -- common/autotest_common.sh@874 -- # size=4096 00:12:46.733 05:10:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.733 05:10:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:46.733 05:10:05 -- common/autotest_common.sh@877 -- # return 0 00:12:46.733 05:10:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:46.733 05:10:05 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:46.733 05:10:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:12:46.733 /dev/nbd10 00:12:46.992 05:10:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:46.992 05:10:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:46.992 05:10:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:12:46.992 05:10:05 -- common/autotest_common.sh@857 -- # local i 00:12:46.992 05:10:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:46.992 05:10:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:46.992 05:10:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:12:46.992 05:10:05 -- common/autotest_common.sh@861 -- # break 00:12:46.992 05:10:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:46.992 05:10:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:46.992 05:10:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:46.992 1+0 records in 00:12:46.992 1+0 records out 00:12:46.992 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307304 s, 13.3 MB/s 00:12:46.992 05:10:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.992 05:10:05 -- common/autotest_common.sh@874 -- # size=4096 00:12:46.992 05:10:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.992 05:10:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:46.992 05:10:05 -- common/autotest_common.sh@877 -- # return 0 00:12:46.992 05:10:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:46.992 05:10:05 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 
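Each attach in this trace has the same shape: rpc.py nbd_start_disk maps a bdev onto /dev/nbdX, the script waits for the name to appear in /proc/partitions, and then proves the device serves I/O with a single 4 KiB O_DIRECT read. A rough sketch of that sequence, using the socket and script paths shown in the trace but a scratch file of the caller's choosing:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk-nbd.sock

  # Attach one bdev to an nbd device and wait until it is usable (sketch).
  start_and_wait() {
      local bdev=$1 dev=$2 tmp=/tmp/nbdtest
      "$RPC" -s "$SOCK" nbd_start_disk "$bdev" "$dev"
      local i
      for (( i = 1; i <= 20; i++ )); do
          grep -q -w "$(basename "$dev")" /proc/partitions && break
          sleep 1
      done
      # One 4 KiB direct read confirms the block device actually answers I/O.
      dd if="$dev" of="$tmp" bs=4096 count=1 iflag=direct
      local size
      size=$(stat -c %s "$tmp")
      rm -f "$tmp"
      [ "$size" != 0 ]
  }

  # e.g. start_and_wait Malloc0 /dev/nbd0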
00:12:46.992 05:10:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:12:46.992 /dev/nbd11 00:12:46.992 05:10:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:46.992 05:10:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:46.992 05:10:06 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:12:46.992 05:10:06 -- common/autotest_common.sh@857 -- # local i 00:12:46.992 05:10:06 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:46.992 05:10:06 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:46.992 05:10:06 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:12:46.992 05:10:06 -- common/autotest_common.sh@861 -- # break 00:12:46.992 05:10:06 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:46.992 05:10:06 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:46.992 05:10:06 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:46.992 1+0 records in 00:12:46.992 1+0 records out 00:12:46.992 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296748 s, 13.8 MB/s 00:12:46.992 05:10:06 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.992 05:10:06 -- common/autotest_common.sh@874 -- # size=4096 00:12:46.992 05:10:06 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.279 05:10:06 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:47.279 05:10:06 -- common/autotest_common.sh@877 -- # return 0 00:12:47.279 05:10:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:47.279 05:10:06 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:47.279 05:10:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:12:47.279 /dev/nbd12 00:12:47.279 05:10:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:47.279 05:10:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:47.279 05:10:06 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:12:47.279 05:10:06 -- common/autotest_common.sh@857 -- # local i 00:12:47.279 05:10:06 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:47.279 05:10:06 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:47.279 05:10:06 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:12:47.279 05:10:06 -- common/autotest_common.sh@861 -- # break 00:12:47.279 05:10:06 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:47.279 05:10:06 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:47.279 05:10:06 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:47.279 1+0 records in 00:12:47.279 1+0 records out 00:12:47.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422534 s, 9.7 MB/s 00:12:47.279 05:10:06 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.279 05:10:06 -- common/autotest_common.sh@874 -- # size=4096 00:12:47.279 05:10:06 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.279 05:10:06 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:47.279 05:10:06 -- common/autotest_common.sh@877 -- # return 0 00:12:47.279 05:10:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:47.279 05:10:06 -- 
bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:47.279 05:10:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:12:47.538 /dev/nbd13 00:12:47.538 05:10:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:47.538 05:10:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:47.538 05:10:06 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:12:47.538 05:10:06 -- common/autotest_common.sh@857 -- # local i 00:12:47.538 05:10:06 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:47.538 05:10:06 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:47.538 05:10:06 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:12:47.538 05:10:06 -- common/autotest_common.sh@861 -- # break 00:12:47.538 05:10:06 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:47.538 05:10:06 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:47.538 05:10:06 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:47.538 1+0 records in 00:12:47.538 1+0 records out 00:12:47.538 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400419 s, 10.2 MB/s 00:12:47.538 05:10:06 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.538 05:10:06 -- common/autotest_common.sh@874 -- # size=4096 00:12:47.538 05:10:06 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.538 05:10:06 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:47.538 05:10:06 -- common/autotest_common.sh@877 -- # return 0 00:12:47.538 05:10:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:47.538 05:10:06 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:47.538 05:10:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:12:47.798 /dev/nbd14 00:12:47.798 05:10:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:12:47.798 05:10:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:12:47.798 05:10:06 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:12:47.798 05:10:06 -- common/autotest_common.sh@857 -- # local i 00:12:47.798 05:10:06 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:47.798 05:10:06 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:47.798 05:10:06 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:12:47.798 05:10:06 -- common/autotest_common.sh@861 -- # break 00:12:47.798 05:10:06 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:47.798 05:10:06 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:47.798 05:10:06 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:47.798 1+0 records in 00:12:47.798 1+0 records out 00:12:47.798 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375808 s, 10.9 MB/s 00:12:47.798 05:10:06 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.798 05:10:06 -- common/autotest_common.sh@874 -- # size=4096 00:12:47.798 05:10:06 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.798 05:10:06 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:47.798 05:10:06 -- common/autotest_common.sh@877 -- # return 0 00:12:47.798 05:10:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 
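The (( i++ )) / (( i < 16 )) counters in the trace come from the outer loop that pairs the 16 bdev names with the 16 /dev/nbd* paths. A condensed sketch of that pairing, reusing the start_and_wait helper sketched above (lists copied from the trace):

  bdev_list=(Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 \
             Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0)
  nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 \
            /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9)

  # Note the nbd ordering: nbd10..nbd15 sort before nbd2..nbd9, so Malloc1p1 lands on /dev/nbd10.
  for (( i = 0; i < ${#nbd_list[@]}; i++ )); do
      start_and_wait "${bdev_list[$i]}" "${nbd_list[$i]}"
  done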
00:12:47.798 05:10:06 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:47.798 05:10:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:12:48.056 /dev/nbd15 00:12:48.056 05:10:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:12:48.056 05:10:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:12:48.056 05:10:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd15 00:12:48.056 05:10:07 -- common/autotest_common.sh@857 -- # local i 00:12:48.056 05:10:07 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:48.056 05:10:07 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:48.056 05:10:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:12:48.056 05:10:07 -- common/autotest_common.sh@861 -- # break 00:12:48.056 05:10:07 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:48.056 05:10:07 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:48.057 05:10:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:48.057 1+0 records in 00:12:48.057 1+0 records out 00:12:48.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400184 s, 10.2 MB/s 00:12:48.057 05:10:07 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.057 05:10:07 -- common/autotest_common.sh@874 -- # size=4096 00:12:48.057 05:10:07 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.057 05:10:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:48.057 05:10:07 -- common/autotest_common.sh@877 -- # return 0 00:12:48.057 05:10:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:48.057 05:10:07 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:48.057 05:10:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:12:48.315 /dev/nbd2 00:12:48.315 05:10:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:12:48.315 05:10:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:12:48.315 05:10:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:12:48.315 05:10:07 -- common/autotest_common.sh@857 -- # local i 00:12:48.315 05:10:07 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:48.315 05:10:07 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:48.315 05:10:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:12:48.315 05:10:07 -- common/autotest_common.sh@861 -- # break 00:12:48.315 05:10:07 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:48.315 05:10:07 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:48.315 05:10:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:48.315 1+0 records in 00:12:48.315 1+0 records out 00:12:48.315 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580877 s, 7.1 MB/s 00:12:48.315 05:10:07 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.315 05:10:07 -- common/autotest_common.sh@874 -- # size=4096 00:12:48.315 05:10:07 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.315 05:10:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:48.315 05:10:07 -- common/autotest_common.sh@877 -- # return 0 00:12:48.315 05:10:07 -- bdev/nbd_common.sh@14 
-- # (( i++ )) 00:12:48.315 05:10:07 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:48.315 05:10:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:12:48.573 /dev/nbd3 00:12:48.573 05:10:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:12:48.573 05:10:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:12:48.573 05:10:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:12:48.573 05:10:07 -- common/autotest_common.sh@857 -- # local i 00:12:48.573 05:10:07 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:48.573 05:10:07 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:48.573 05:10:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:12:48.573 05:10:07 -- common/autotest_common.sh@861 -- # break 00:12:48.573 05:10:07 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:48.573 05:10:07 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:48.573 05:10:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:48.573 1+0 records in 00:12:48.573 1+0 records out 00:12:48.573 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0005702 s, 7.2 MB/s 00:12:48.573 05:10:07 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.573 05:10:07 -- common/autotest_common.sh@874 -- # size=4096 00:12:48.573 05:10:07 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.573 05:10:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:48.573 05:10:07 -- common/autotest_common.sh@877 -- # return 0 00:12:48.573 05:10:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:48.573 05:10:07 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:48.573 05:10:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:12:48.831 /dev/nbd4 00:12:48.831 05:10:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:12:48.832 05:10:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:12:48.832 05:10:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:12:48.832 05:10:07 -- common/autotest_common.sh@857 -- # local i 00:12:48.832 05:10:07 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:48.832 05:10:07 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:48.832 05:10:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:12:48.832 05:10:07 -- common/autotest_common.sh@861 -- # break 00:12:48.832 05:10:07 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:48.832 05:10:07 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:48.832 05:10:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:48.832 1+0 records in 00:12:48.832 1+0 records out 00:12:48.832 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000807862 s, 5.1 MB/s 00:12:48.832 05:10:07 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.832 05:10:07 -- common/autotest_common.sh@874 -- # size=4096 00:12:48.832 05:10:07 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.832 05:10:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:48.832 05:10:07 -- common/autotest_common.sh@877 -- # return 0 00:12:48.832 05:10:07 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:48.832 05:10:07 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:48.832 05:10:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:12:49.090 /dev/nbd5 00:12:49.090 05:10:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:12:49.090 05:10:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:12:49.090 05:10:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:12:49.090 05:10:08 -- common/autotest_common.sh@857 -- # local i 00:12:49.090 05:10:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:49.090 05:10:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:49.090 05:10:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:12:49.090 05:10:08 -- common/autotest_common.sh@861 -- # break 00:12:49.090 05:10:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:49.090 05:10:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:49.090 05:10:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:49.090 1+0 records in 00:12:49.090 1+0 records out 00:12:49.090 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000686239 s, 6.0 MB/s 00:12:49.090 05:10:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.090 05:10:08 -- common/autotest_common.sh@874 -- # size=4096 00:12:49.090 05:10:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.090 05:10:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:49.090 05:10:08 -- common/autotest_common.sh@877 -- # return 0 00:12:49.090 05:10:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:49.090 05:10:08 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:49.090 05:10:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:12:49.348 /dev/nbd6 00:12:49.348 05:10:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:12:49.348 05:10:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:12:49.348 05:10:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:12:49.348 05:10:08 -- common/autotest_common.sh@857 -- # local i 00:12:49.348 05:10:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:49.348 05:10:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:49.348 05:10:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:12:49.348 05:10:08 -- common/autotest_common.sh@861 -- # break 00:12:49.348 05:10:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:49.348 05:10:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:49.348 05:10:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:49.348 1+0 records in 00:12:49.348 1+0 records out 00:12:49.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545485 s, 7.5 MB/s 00:12:49.348 05:10:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.348 05:10:08 -- common/autotest_common.sh@874 -- # size=4096 00:12:49.348 05:10:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.606 05:10:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:49.606 05:10:08 -- common/autotest_common.sh@877 -- # return 0 00:12:49.606 05:10:08 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:49.606 05:10:08 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:49.606 05:10:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:12:49.606 /dev/nbd7 00:12:49.606 05:10:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:12:49.606 05:10:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:12:49.606 05:10:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 00:12:49.606 05:10:08 -- common/autotest_common.sh@857 -- # local i 00:12:49.606 05:10:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:49.606 05:10:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:49.606 05:10:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:12:49.606 05:10:08 -- common/autotest_common.sh@861 -- # break 00:12:49.606 05:10:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:49.606 05:10:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:49.606 05:10:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:49.606 1+0 records in 00:12:49.606 1+0 records out 00:12:49.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000996239 s, 4.1 MB/s 00:12:49.606 05:10:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.606 05:10:08 -- common/autotest_common.sh@874 -- # size=4096 00:12:49.606 05:10:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.606 05:10:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:49.606 05:10:08 -- common/autotest_common.sh@877 -- # return 0 00:12:49.606 05:10:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:49.606 05:10:08 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:49.607 05:10:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:12:49.865 /dev/nbd8 00:12:49.865 05:10:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:12:49.865 05:10:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:12:49.865 05:10:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 00:12:49.865 05:10:08 -- common/autotest_common.sh@857 -- # local i 00:12:49.865 05:10:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:49.865 05:10:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:49.865 05:10:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:12:49.865 05:10:08 -- common/autotest_common.sh@861 -- # break 00:12:49.865 05:10:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:49.865 05:10:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:49.865 05:10:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:49.865 1+0 records in 00:12:49.865 1+0 records out 00:12:49.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000711347 s, 5.8 MB/s 00:12:49.865 05:10:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.123 05:10:08 -- common/autotest_common.sh@874 -- # size=4096 00:12:50.123 05:10:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.123 05:10:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:50.123 05:10:08 -- common/autotest_common.sh@877 -- # return 0 00:12:50.123 05:10:08 
-- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:50.123 05:10:08 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:50.123 05:10:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:12:50.123 /dev/nbd9 00:12:50.382 05:10:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:12:50.382 05:10:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:12:50.382 05:10:09 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 00:12:50.382 05:10:09 -- common/autotest_common.sh@857 -- # local i 00:12:50.382 05:10:09 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:50.382 05:10:09 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:50.382 05:10:09 -- common/autotest_common.sh@860 -- # grep -q -w nbd9 /proc/partitions 00:12:50.382 05:10:09 -- common/autotest_common.sh@861 -- # break 00:12:50.382 05:10:09 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:50.382 05:10:09 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:50.382 05:10:09 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.382 1+0 records in 00:12:50.382 1+0 records out 00:12:50.382 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00134621 s, 3.0 MB/s 00:12:50.382 05:10:09 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.382 05:10:09 -- common/autotest_common.sh@874 -- # size=4096 00:12:50.382 05:10:09 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.382 05:10:09 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:50.382 05:10:09 -- common/autotest_common.sh@877 -- # return 0 00:12:50.382 05:10:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:50.382 05:10:09 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:50.382 05:10:09 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:50.382 05:10:09 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:50.382 05:10:09 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:50.641 05:10:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:50.641 { 00:12:50.641 "nbd_device": "/dev/nbd0", 00:12:50.641 "bdev_name": "Malloc0" 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "nbd_device": "/dev/nbd1", 00:12:50.641 "bdev_name": "Malloc1p0" 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "nbd_device": "/dev/nbd10", 00:12:50.641 "bdev_name": "Malloc1p1" 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "nbd_device": "/dev/nbd11", 00:12:50.641 "bdev_name": "Malloc2p0" 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "nbd_device": "/dev/nbd12", 00:12:50.641 "bdev_name": "Malloc2p1" 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "nbd_device": "/dev/nbd13", 00:12:50.641 "bdev_name": "Malloc2p2" 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "nbd_device": "/dev/nbd14", 00:12:50.641 "bdev_name": "Malloc2p3" 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "nbd_device": "/dev/nbd15", 00:12:50.641 "bdev_name": "Malloc2p4" 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "nbd_device": "/dev/nbd2", 00:12:50.641 "bdev_name": "Malloc2p5" 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "nbd_device": "/dev/nbd3", 00:12:50.641 "bdev_name": "Malloc2p6" 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "nbd_device": "/dev/nbd4", 00:12:50.641 "bdev_name": "Malloc2p7" 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "nbd_device": "/dev/nbd5", 00:12:50.641 "bdev_name": 
"TestPT" 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "nbd_device": "/dev/nbd6", 00:12:50.641 "bdev_name": "raid0" 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "nbd_device": "/dev/nbd7", 00:12:50.641 "bdev_name": "concat0" 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "nbd_device": "/dev/nbd8", 00:12:50.641 "bdev_name": "raid1" 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "nbd_device": "/dev/nbd9", 00:12:50.641 "bdev_name": "AIO0" 00:12:50.641 } 00:12:50.641 ]' 00:12:50.641 05:10:09 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:50.641 { 00:12:50.641 "nbd_device": "/dev/nbd0", 00:12:50.641 "bdev_name": "Malloc0" 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "nbd_device": "/dev/nbd1", 00:12:50.641 "bdev_name": "Malloc1p0" 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "nbd_device": "/dev/nbd10", 00:12:50.641 "bdev_name": "Malloc1p1" 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "nbd_device": "/dev/nbd11", 00:12:50.641 "bdev_name": "Malloc2p0" 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "nbd_device": "/dev/nbd12", 00:12:50.641 "bdev_name": "Malloc2p1" 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "nbd_device": "/dev/nbd13", 00:12:50.641 "bdev_name": "Malloc2p2" 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "nbd_device": "/dev/nbd14", 00:12:50.641 "bdev_name": "Malloc2p3" 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "nbd_device": "/dev/nbd15", 00:12:50.641 "bdev_name": "Malloc2p4" 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "nbd_device": "/dev/nbd2", 00:12:50.641 "bdev_name": "Malloc2p5" 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "nbd_device": "/dev/nbd3", 00:12:50.641 "bdev_name": "Malloc2p6" 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "nbd_device": "/dev/nbd4", 00:12:50.641 "bdev_name": "Malloc2p7" 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "nbd_device": "/dev/nbd5", 00:12:50.641 "bdev_name": "TestPT" 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "nbd_device": "/dev/nbd6", 00:12:50.641 "bdev_name": "raid0" 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "nbd_device": "/dev/nbd7", 00:12:50.641 "bdev_name": "concat0" 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "nbd_device": "/dev/nbd8", 00:12:50.641 "bdev_name": "raid1" 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "nbd_device": "/dev/nbd9", 00:12:50.641 "bdev_name": "AIO0" 00:12:50.641 } 00:12:50.641 ]' 00:12:50.641 05:10:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:50.641 05:10:09 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:50.641 /dev/nbd1 00:12:50.641 /dev/nbd10 00:12:50.641 /dev/nbd11 00:12:50.641 /dev/nbd12 00:12:50.641 /dev/nbd13 00:12:50.641 /dev/nbd14 00:12:50.641 /dev/nbd15 00:12:50.641 /dev/nbd2 00:12:50.641 /dev/nbd3 00:12:50.641 /dev/nbd4 00:12:50.641 /dev/nbd5 00:12:50.641 /dev/nbd6 00:12:50.641 /dev/nbd7 00:12:50.641 /dev/nbd8 00:12:50.641 /dev/nbd9' 00:12:50.641 05:10:09 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:50.641 /dev/nbd1 00:12:50.641 /dev/nbd10 00:12:50.641 /dev/nbd11 00:12:50.641 /dev/nbd12 00:12:50.641 /dev/nbd13 00:12:50.641 /dev/nbd14 00:12:50.641 /dev/nbd15 00:12:50.641 /dev/nbd2 00:12:50.641 /dev/nbd3 00:12:50.641 /dev/nbd4 00:12:50.641 /dev/nbd5 00:12:50.641 /dev/nbd6 00:12:50.641 /dev/nbd7 00:12:50.641 /dev/nbd8 00:12:50.641 /dev/nbd9' 00:12:50.641 05:10:09 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:50.641 05:10:09 -- bdev/nbd_common.sh@65 -- # count=16 00:12:50.641 05:10:09 -- bdev/nbd_common.sh@66 -- # echo 16 00:12:50.641 05:10:09 -- bdev/nbd_common.sh@95 -- # count=16 00:12:50.641 05:10:09 -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:12:50.641 05:10:09 -- 
bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:12:50.641 05:10:09 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:50.641 05:10:09 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:50.641 05:10:09 -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:50.641 05:10:09 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:50.641 05:10:09 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:50.641 05:10:09 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:50.641 256+0 records in 00:12:50.641 256+0 records out 00:12:50.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00657365 s, 160 MB/s 00:12:50.641 05:10:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:50.641 05:10:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:50.642 256+0 records in 00:12:50.642 256+0 records out 00:12:50.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.1599 s, 6.6 MB/s 00:12:50.642 05:10:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:50.642 05:10:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:50.900 256+0 records in 00:12:50.900 256+0 records out 00:12:50.900 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167903 s, 6.2 MB/s 00:12:50.900 05:10:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:50.900 05:10:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:51.158 256+0 records in 00:12:51.158 256+0 records out 00:12:51.158 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168378 s, 6.2 MB/s 00:12:51.158 05:10:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:51.158 05:10:10 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:51.158 256+0 records in 00:12:51.158 256+0 records out 00:12:51.158 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169353 s, 6.2 MB/s 00:12:51.158 05:10:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:51.158 05:10:10 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:51.417 256+0 records in 00:12:51.417 256+0 records out 00:12:51.417 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165982 s, 6.3 MB/s 00:12:51.417 05:10:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:51.417 05:10:10 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:51.675 256+0 records in 00:12:51.675 256+0 records out 00:12:51.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165313 s, 6.3 MB/s 00:12:51.675 05:10:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:51.675 05:10:10 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:12:51.675 256+0 records in 
00:12:51.675 256+0 records out 00:12:51.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.164501 s, 6.4 MB/s 00:12:51.675 05:10:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:51.675 05:10:10 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:12:51.933 256+0 records in 00:12:51.933 256+0 records out 00:12:51.933 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.164402 s, 6.4 MB/s 00:12:51.933 05:10:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:51.933 05:10:10 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:12:51.933 256+0 records in 00:12:51.933 256+0 records out 00:12:51.933 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155328 s, 6.8 MB/s 00:12:51.933 05:10:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:51.933 05:10:11 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:12:52.191 256+0 records in 00:12:52.191 256+0 records out 00:12:52.191 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.171721 s, 6.1 MB/s 00:12:52.191 05:10:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:52.191 05:10:11 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:12:52.450 256+0 records in 00:12:52.450 256+0 records out 00:12:52.450 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.166743 s, 6.3 MB/s 00:12:52.450 05:10:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:52.450 05:10:11 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:12:52.450 256+0 records in 00:12:52.450 256+0 records out 00:12:52.450 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167544 s, 6.3 MB/s 00:12:52.450 05:10:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:52.450 05:10:11 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:12:52.708 256+0 records in 00:12:52.708 256+0 records out 00:12:52.708 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.16408 s, 6.4 MB/s 00:12:52.708 05:10:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:52.708 05:10:11 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:12:52.966 256+0 records in 00:12:52.966 256+0 records out 00:12:52.966 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168106 s, 6.2 MB/s 00:12:52.966 05:10:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:52.966 05:10:11 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:12:52.966 256+0 records in 00:12:52.966 256+0 records out 00:12:52.966 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169153 s, 6.2 MB/s 00:12:52.966 05:10:12 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:52.966 05:10:12 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:12:53.225 256+0 records in 00:12:53.225 256+0 records out 00:12:53.225 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.256141 s, 4.1 MB/s 00:12:53.225 05:10:12 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 
/dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:12:53.225 05:10:12 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:53.225 05:10:12 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:53.225 05:10:12 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:53.225 05:10:12 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:53.225 05:10:12 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:53.225 05:10:12 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:53.225 05:10:12 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.225 05:10:12 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:53.225 05:10:12 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.225 05:10:12 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@83 
-- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@51 -- # local i 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:53.485 05:10:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:53.743 05:10:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:53.743 05:10:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:53.743 05:10:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:53.743 05:10:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:53.743 05:10:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:53.743 05:10:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:53.743 05:10:12 -- bdev/nbd_common.sh@41 -- # break 00:12:53.743 05:10:12 -- bdev/nbd_common.sh@45 -- # return 0 00:12:53.743 05:10:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:53.743 05:10:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:54.002 05:10:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:54.002 05:10:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:54.002 05:10:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:54.002 05:10:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:54.002 05:10:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.002 05:10:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:54.002 05:10:13 -- bdev/nbd_common.sh@41 -- # break 00:12:54.003 05:10:13 -- bdev/nbd_common.sh@45 -- # return 0 00:12:54.003 05:10:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:54.003 05:10:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:54.261 05:10:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:54.261 05:10:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:54.261 05:10:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:54.261 05:10:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:54.261 05:10:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.261 05:10:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:54.261 
05:10:13 -- bdev/nbd_common.sh@41 -- # break 00:12:54.261 05:10:13 -- bdev/nbd_common.sh@45 -- # return 0 00:12:54.261 05:10:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:54.261 05:10:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:54.518 05:10:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:54.518 05:10:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:54.518 05:10:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:54.518 05:10:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:54.518 05:10:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.518 05:10:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:54.518 05:10:13 -- bdev/nbd_common.sh@41 -- # break 00:12:54.518 05:10:13 -- bdev/nbd_common.sh@45 -- # return 0 00:12:54.518 05:10:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:54.518 05:10:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:54.775 05:10:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:54.775 05:10:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:54.775 05:10:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:54.775 05:10:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:54.775 05:10:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.775 05:10:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:54.775 05:10:13 -- bdev/nbd_common.sh@41 -- # break 00:12:54.775 05:10:13 -- bdev/nbd_common.sh@45 -- # return 0 00:12:54.775 05:10:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:54.775 05:10:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:55.033 05:10:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:55.033 05:10:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:55.033 05:10:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:55.033 05:10:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.033 05:10:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.033 05:10:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:55.033 05:10:14 -- bdev/nbd_common.sh@41 -- # break 00:12:55.033 05:10:14 -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.033 05:10:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.033 05:10:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:55.291 05:10:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:55.291 05:10:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:55.291 05:10:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:55.291 05:10:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.291 05:10:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.291 05:10:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:55.291 05:10:14 -- bdev/nbd_common.sh@41 -- # break 00:12:55.291 05:10:14 -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.291 05:10:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.291 05:10:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:55.549 05:10:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:55.549 05:10:14 -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:55.550 05:10:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:55.550 05:10:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.550 05:10:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.550 05:10:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:55.550 05:10:14 -- bdev/nbd_common.sh@41 -- # break 00:12:55.550 05:10:14 -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.550 05:10:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.550 05:10:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:55.809 05:10:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:55.809 05:10:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:55.809 05:10:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:55.809 05:10:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.809 05:10:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.809 05:10:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:56.068 05:10:14 -- bdev/nbd_common.sh@41 -- # break 00:12:56.068 05:10:14 -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.068 05:10:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.068 05:10:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:56.068 05:10:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:56.068 05:10:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:56.068 05:10:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:56.068 05:10:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.068 05:10:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.068 05:10:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:56.068 05:10:15 -- bdev/nbd_common.sh@41 -- # break 00:12:56.068 05:10:15 -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.068 05:10:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.068 05:10:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:56.327 05:10:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:56.327 05:10:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:56.327 05:10:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:56.327 05:10:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.327 05:10:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.327 05:10:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:56.327 05:10:15 -- bdev/nbd_common.sh@41 -- # break 00:12:56.327 05:10:15 -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.327 05:10:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.327 05:10:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:56.586 05:10:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:56.586 05:10:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:56.586 05:10:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:56.586 05:10:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.586 05:10:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.586 05:10:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:56.586 05:10:15 -- bdev/nbd_common.sh@41 -- # break 00:12:56.586 05:10:15 -- bdev/nbd_common.sh@45 -- # return 0 
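Each nbd_stop_disk / waitfornbd_exit pair above (and for the remaining devices below) follows the same pattern: the RPC asks the SPDK target to detach the export, then the helper polls /proc/partitions until the kernel drops the nbd node. A minimal bash sketch of that pattern, assuming the rpc.py path and socket shown in the trace and an illustrative 0.1 s poll interval (the trace only shows the counter bound and the grep):

    # Stop one nbd export and wait for /dev/nbdX to disappear from /proc/partitions.
    stop_and_wait_nbd() {
        local rpc_sock=/var/tmp/spdk-nbd.sock
        local dev=$1                        # e.g. /dev/nbd5
        local name
        name=$(basename "$dev")             # nbd5, the name listed in /proc/partitions

        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" nbd_stop_disk "$dev"

        # Poll up to 20 times, matching the (( i <= 20 )) bound seen in the trace above.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions || break
            sleep 0.1                       # interval is an assumption; not visible in the log
        done
        return 0
    }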
00:12:56.586 05:10:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.586 05:10:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:56.845 05:10:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:56.845 05:10:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:56.845 05:10:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:56.845 05:10:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.845 05:10:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.845 05:10:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:56.845 05:10:15 -- bdev/nbd_common.sh@41 -- # break 00:12:56.845 05:10:15 -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.845 05:10:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.845 05:10:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:57.104 05:10:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:57.104 05:10:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:57.104 05:10:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:57.104 05:10:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.104 05:10:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.104 05:10:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:57.104 05:10:16 -- bdev/nbd_common.sh@41 -- # break 00:12:57.104 05:10:16 -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.104 05:10:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.104 05:10:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:57.363 05:10:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:57.363 05:10:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:57.363 05:10:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:57.363 05:10:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.363 05:10:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.363 05:10:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:57.363 05:10:16 -- bdev/nbd_common.sh@41 -- # break 00:12:57.363 05:10:16 -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.363 05:10:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.363 05:10:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:57.622 05:10:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:57.622 05:10:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:57.622 05:10:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:57.622 05:10:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.622 05:10:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.622 05:10:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:57.622 05:10:16 -- bdev/nbd_common.sh@41 -- # break 00:12:57.622 05:10:16 -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.622 05:10:16 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:57.622 05:10:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:57.622 05:10:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:57.881 05:10:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:57.881 05:10:16 -- bdev/nbd_common.sh@64 -- # echo '[]' 
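nbd_get_count then re-queries the target to confirm nothing is left attached: nbd_get_disks returned the empty list '[]' above, which is piped through jq and grep below, so the count comes out as 0 and the '[ 0 -ne 0 ]' guard falls through. A rough standalone equivalent of that check, assuming the rpc.py path and socket from the trace (the error message is added here purely for illustration):

    # Count nbd devices still exported by the target; fail if any remain.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    count=$("$rpc" -s "$sock" nbd_get_disks \
            | jq -r '.[] | .nbd_device' \
            | grep -c /dev/nbd || true)     # grep -c exits non-zero when there are no matches

    if [ "$count" -ne 0 ]; then
        echo "still have $count nbd device(s) attached" >&2
        exit 1
    fi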
00:12:57.881 05:10:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:57.881 05:10:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:57.881 05:10:16 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:57.881 05:10:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:57.881 05:10:16 -- bdev/nbd_common.sh@65 -- # true 00:12:57.881 05:10:16 -- bdev/nbd_common.sh@65 -- # count=0 00:12:57.881 05:10:16 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:57.881 05:10:16 -- bdev/nbd_common.sh@104 -- # count=0 00:12:57.881 05:10:16 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:57.881 05:10:16 -- bdev/nbd_common.sh@109 -- # return 0 00:12:57.881 05:10:16 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:57.881 05:10:16 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:57.881 05:10:16 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:57.881 05:10:16 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:12:57.881 05:10:16 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:12:57.881 05:10:16 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:58.140 malloc_lvol_verify 00:12:58.140 05:10:17 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:58.399 1b2d592d-817d-4ef1-a9fd-a7c1331d4322 00:12:58.399 05:10:17 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:58.658 7916b6dd-94e4-477e-ac92-eb9d614f3263 00:12:58.658 05:10:17 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:58.917 /dev/nbd0 00:12:58.917 05:10:17 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:12:58.917 mke2fs 1.47.0 (5-Feb-2023) 00:12:58.917 00:12:58.917 Filesystem too small for a journal 00:12:58.917 Discarding device blocks: 0/1024 done 00:12:58.917 Creating filesystem with 1024 4k blocks and 1024 inodes 00:12:58.917 00:12:58.917 Allocating group tables: 0/1 done 00:12:58.917 Writing inode tables: 0/1 done 00:12:58.917 Writing superblocks and filesystem accounting information: 0/1 done 00:12:58.917 00:12:58.917 05:10:17 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:12:58.917 05:10:17 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:58.917 05:10:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:58.917 05:10:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:58.917 05:10:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:58.917 05:10:17 -- bdev/nbd_common.sh@51 -- # local i 00:12:58.917 05:10:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.917 05:10:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:59.176 05:10:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:59.177 05:10:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:59.177 05:10:18 -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:59.177 05:10:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.177 05:10:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.177 05:10:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:59.177 05:10:18 -- bdev/nbd_common.sh@41 -- # break 00:12:59.177 05:10:18 -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.177 05:10:18 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:12:59.177 05:10:18 -- bdev/nbd_common.sh@147 -- # return 0 00:12:59.177 05:10:18 -- bdev/blockdev.sh@324 -- # killprocess 65639 00:12:59.177 05:10:18 -- common/autotest_common.sh@926 -- # '[' -z 65639 ']' 00:12:59.177 05:10:18 -- common/autotest_common.sh@930 -- # kill -0 65639 00:12:59.177 05:10:18 -- common/autotest_common.sh@931 -- # uname 00:12:59.177 05:10:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:59.177 05:10:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65639 00:12:59.177 05:10:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:59.177 05:10:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:59.177 killing process with pid 65639 00:12:59.177 05:10:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65639' 00:12:59.177 05:10:18 -- common/autotest_common.sh@945 -- # kill 65639 00:12:59.177 05:10:18 -- common/autotest_common.sh@950 -- # wait 65639 00:13:01.082 05:10:20 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:13:01.082 00:13:01.082 real 0m25.332s 00:13:01.082 user 0m34.974s 00:13:01.082 sys 0m8.939s 00:13:01.082 05:10:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:01.082 ************************************ 00:13:01.082 END TEST bdev_nbd 00:13:01.082 05:10:20 -- common/autotest_common.sh@10 -- # set +x 00:13:01.082 ************************************ 00:13:01.082 05:10:20 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:13:01.082 05:10:20 -- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']' 00:13:01.082 05:10:20 -- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']' 00:13:01.082 05:10:20 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:13:01.082 05:10:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:01.082 05:10:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:01.082 05:10:20 -- common/autotest_common.sh@10 -- # set +x 00:13:01.341 ************************************ 00:13:01.341 START TEST bdev_fio 00:13:01.341 ************************************ 00:13:01.341 05:10:20 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:13:01.341 05:10:20 -- bdev/blockdev.sh@329 -- # local env_context 00:13:01.341 05:10:20 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:01.341 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:01.341 05:10:20 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:01.341 05:10:20 -- bdev/blockdev.sh@337 -- # echo '' 00:13:01.341 05:10:20 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:13:01.341 05:10:20 -- bdev/blockdev.sh@337 -- # env_context= 00:13:01.341 05:10:20 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:01.341 05:10:20 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:01.341 05:10:20 -- common/autotest_common.sh@1260 -- # local workload=verify 00:13:01.341 05:10:20 -- common/autotest_common.sh@1261 -- # local 
bdev_type=AIO 00:13:01.341 05:10:20 -- common/autotest_common.sh@1262 -- # local env_context= 00:13:01.341 05:10:20 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:13:01.341 05:10:20 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:01.341 05:10:20 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:13:01.341 05:10:20 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:13:01.341 05:10:20 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:01.341 05:10:20 -- common/autotest_common.sh@1280 -- # cat 00:13:01.341 05:10:20 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:13:01.341 05:10:20 -- common/autotest_common.sh@1293 -- # cat 00:13:01.341 05:10:20 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:13:01.341 05:10:20 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:13:01.341 05:10:20 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:01.341 05:10:20 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:13:01.341 05:10:20 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:01.341 05:10:20 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]' 00:13:01.341 05:10:20 -- bdev/blockdev.sh@341 -- # echo filename=Malloc0 00:13:01.341 05:10:20 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:01.341 05:10:20 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]' 00:13:01.341 05:10:20 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0 00:13:01.341 05:10:20 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:01.341 05:10:20 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]' 00:13:01.341 05:10:20 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1 00:13:01.341 05:10:20 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:01.341 05:10:20 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]' 00:13:01.341 05:10:20 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0 00:13:01.341 05:10:20 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:01.341 05:10:20 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]' 00:13:01.341 05:10:20 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1 00:13:01.341 05:10:20 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:01.341 05:10:20 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]' 00:13:01.341 05:10:20 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p2 00:13:01.341 05:10:20 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:01.341 05:10:20 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]' 00:13:01.341 05:10:20 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3 00:13:01.341 05:10:20 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:01.341 05:10:20 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]' 00:13:01.341 05:10:20 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p4 00:13:01.341 05:10:20 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:01.341 05:10:20 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]' 00:13:01.341 05:10:20 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5 00:13:01.341 05:10:20 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:01.341 05:10:20 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]' 00:13:01.341 05:10:20 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6 00:13:01.341 05:10:20 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:01.341 05:10:20 -- bdev/blockdev.sh@340 -- # echo 
'[job_Malloc2p7]' 00:13:01.341 05:10:20 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7 00:13:01.341 05:10:20 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:01.341 05:10:20 -- bdev/blockdev.sh@340 -- # echo '[job_TestPT]' 00:13:01.341 05:10:20 -- bdev/blockdev.sh@341 -- # echo filename=TestPT 00:13:01.341 05:10:20 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:01.341 05:10:20 -- bdev/blockdev.sh@340 -- # echo '[job_raid0]' 00:13:01.341 05:10:20 -- bdev/blockdev.sh@341 -- # echo filename=raid0 00:13:01.341 05:10:20 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:01.342 05:10:20 -- bdev/blockdev.sh@340 -- # echo '[job_concat0]' 00:13:01.342 05:10:20 -- bdev/blockdev.sh@341 -- # echo filename=concat0 00:13:01.342 05:10:20 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:01.342 05:10:20 -- bdev/blockdev.sh@340 -- # echo '[job_raid1]' 00:13:01.342 05:10:20 -- bdev/blockdev.sh@341 -- # echo filename=raid1 00:13:01.342 05:10:20 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:01.342 05:10:20 -- bdev/blockdev.sh@340 -- # echo '[job_AIO0]' 00:13:01.342 05:10:20 -- bdev/blockdev.sh@341 -- # echo filename=AIO0 00:13:01.342 05:10:20 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:01.342 05:10:20 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:01.342 05:10:20 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:13:01.342 05:10:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:01.342 05:10:20 -- common/autotest_common.sh@10 -- # set +x 00:13:01.342 ************************************ 00:13:01.342 START TEST bdev_fio_rw_verify 00:13:01.342 ************************************ 00:13:01.342 05:10:20 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:01.342 05:10:20 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:01.342 05:10:20 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:13:01.342 05:10:20 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:01.342 05:10:20 -- common/autotest_common.sh@1318 -- # local sanitizers 00:13:01.342 05:10:20 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:01.342 05:10:20 -- common/autotest_common.sh@1320 -- # shift 00:13:01.342 05:10:20 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:13:01.342 05:10:20 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:13:01.342 05:10:20 -- common/autotest_common.sh@1324 -- # 
ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:01.342 05:10:20 -- common/autotest_common.sh@1324 -- # grep libasan 00:13:01.342 05:10:20 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:13:01.342 05:10:20 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8 00:13:01.342 05:10:20 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]] 00:13:01.342 05:10:20 -- common/autotest_common.sh@1326 -- # break 00:13:01.342 05:10:20 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:01.342 05:10:20 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:01.601 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.601 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.601 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.601 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.601 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.601 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.601 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.601 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.601 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.601 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.601 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.601 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.601 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.601 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.601 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.601 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.601 fio-3.35 00:13:01.601 Starting 16 threads 00:13:13.831 00:13:13.831 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=66778: Fri Jul 26 05:10:31 2024 00:13:13.831 read: IOPS=83.0k, BW=324MiB/s (340MB/s)(3241MiB/10002msec) 00:13:13.831 slat (usec): min=2, max=13039, avg=34.43, stdev=232.71 00:13:13.831 clat (usec): min=10, max=14360, avg=277.22, stdev=689.38 00:13:13.831 lat (usec): min=29, max=14365, avg=311.65, stdev=725.50 00:13:13.831 clat percentiles 
(usec): 00:13:13.831 | 50.000th=[ 161], 99.000th=[ 4228], 99.900th=[ 7242], 99.990th=[ 9110], 00:13:13.831 | 99.999th=[13173] 00:13:13.831 write: IOPS=133k, BW=518MiB/s (543MB/s)(5107MiB/9860msec); 0 zone resets 00:13:13.831 slat (usec): min=4, max=14930, avg=58.15, stdev=309.19 00:13:13.831 clat (usec): min=9, max=15278, avg=347.98, stdev=763.51 00:13:13.831 lat (usec): min=42, max=15323, avg=406.14, stdev=820.37 00:13:13.831 clat percentiles (usec): 00:13:13.831 | 50.000th=[ 210], 99.000th=[ 4293], 99.900th=[ 7308], 99.990th=[10290], 00:13:13.831 | 99.999th=[14222] 00:13:13.831 bw ( KiB/s): min=364480, max=792960, per=98.63%, avg=523161.26, stdev=8266.27, samples=304 00:13:13.831 iops : min=91120, max=198240, avg=130790.16, stdev=2066.59, samples=304 00:13:13.831 lat (usec) : 10=0.01%, 20=0.01%, 50=0.61%, 100=14.41%, 250=58.08% 00:13:13.831 lat (usec) : 500=22.72%, 750=0.87%, 1000=0.13% 00:13:13.831 lat (msec) : 2=0.09%, 4=1.08%, 10=2.00%, 20=0.01% 00:13:13.831 cpu : usr=58.18%, sys=2.08%, ctx=232331, majf=0, minf=107729 00:13:13.831 IO depths : 1=11.3%, 2=24.1%, 4=51.7%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:13.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.831 complete : 0=0.0%, 4=88.8%, 8=11.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.831 issued rwts: total=829779,1307457,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.831 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:13.831 00:13:13.831 Run status group 0 (all jobs): 00:13:13.831 READ: bw=324MiB/s (340MB/s), 324MiB/s-324MiB/s (340MB/s-340MB/s), io=3241MiB (3399MB), run=10002-10002msec 00:13:13.831 WRITE: bw=518MiB/s (543MB/s), 518MiB/s-518MiB/s (543MB/s-543MB/s), io=5107MiB (5355MB), run=9860-9860msec 00:13:15.204 ----------------------------------------------------- 00:13:15.204 Suppressions used: 00:13:15.204 count bytes template 00:13:15.204 16 140 /usr/src/fio/parse.c 00:13:15.204 9964 956544 /usr/src/fio/iolog.c 00:13:15.204 1 904 libcrypto.so 00:13:15.204 ----------------------------------------------------- 00:13:15.204 00:13:15.204 00:13:15.204 real 0m13.762s 00:13:15.204 user 1m37.914s 00:13:15.204 sys 0m4.014s 00:13:15.204 05:10:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:15.204 05:10:34 -- common/autotest_common.sh@10 -- # set +x 00:13:15.204 ************************************ 00:13:15.204 END TEST bdev_fio_rw_verify 00:13:15.204 ************************************ 00:13:15.204 05:10:34 -- bdev/blockdev.sh@348 -- # rm -f 00:13:15.204 05:10:34 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:15.204 05:10:34 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:15.204 05:10:34 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:15.204 05:10:34 -- common/autotest_common.sh@1260 -- # local workload=trim 00:13:15.204 05:10:34 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:13:15.204 05:10:34 -- common/autotest_common.sh@1262 -- # local env_context= 00:13:15.204 05:10:34 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:13:15.204 05:10:34 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:15.204 05:10:34 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:13:15.204 05:10:34 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:13:15.204 05:10:34 -- common/autotest_common.sh@1278 -- # touch 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:15.204 05:10:34 -- common/autotest_common.sh@1280 -- # cat 00:13:15.204 05:10:34 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:13:15.204 05:10:34 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:13:15.204 05:10:34 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:13:15.204 05:10:34 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:15.205 05:10:34 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "5473e72e-d8c1-4fe7-9e63-a8f903eb1731"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "5473e72e-d8c1-4fe7-9e63-a8f903eb1731",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "d30fe741-e2f0-53f6-b69f-a940b76820c8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "d30fe741-e2f0-53f6-b69f-a940b76820c8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "eed747de-69cd-5213-ac7c-9ac43d120948"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "eed747de-69cd-5213-ac7c-9ac43d120948",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "28a88ec2-9fa6-5cc3-a9fa-20aebe01dd9a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "28a88ec2-9fa6-5cc3-a9fa-20aebe01dd9a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' 
' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "c34d5efb-2747-5f4b-9809-817060826f89"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c34d5efb-2747-5f4b-9809-817060826f89",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "3ffa7bd1-b7a2-51fa-8426-8b0cc127c4ac"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3ffa7bd1-b7a2-51fa-8426-8b0cc127c4ac",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "61d563fc-dd1c-51ed-92a2-af22a4e425aa"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "61d563fc-dd1c-51ed-92a2-af22a4e425aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "f48cf60e-fa3d-532d-89ab-9470cd0f3993"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f48cf60e-fa3d-532d-89ab-9470cd0f3993",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "ef427d45-90f7-5eb9-9371-637387d22b5a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ef427d45-90f7-5eb9-9371-637387d22b5a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": 
true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "126f2ebd-5883-5905-aa38-fcb425847da9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "126f2ebd-5883-5905-aa38-fcb425847da9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "76a0998e-4d62-53ef-a1d7-342ffbe51e84"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "76a0998e-4d62-53ef-a1d7-342ffbe51e84",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "a823701c-3e9c-50d8-b101-0775f2f29432"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a823701c-3e9c-50d8-b101-0775f2f29432",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "5c70fba3-4e6b-45fd-a10c-267337407493"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "5c70fba3-4e6b-45fd-a10c-267337407493",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' 
"dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "5c70fba3-4e6b-45fd-a10c-267337407493",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "2c8a2d26-ac83-42a0-b17e-95d4f517d9fa",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "4cb4fba0-11b8-4757-bece-584061fe9d29",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "d33fa50a-c35d-4078-a456-7a6277e42d52"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "d33fa50a-c35d-4078-a456-7a6277e42d52",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d33fa50a-c35d-4078-a456-7a6277e42d52",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "758ac65d-852e-46dd-83c9-a6ba5bfcc3dc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "989ed421-b73b-41af-aecf-5b5518febf9e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "a101b393-3da3-4f70-8f81-0e581f9aad6f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a101b393-3da3-4f70-8f81-0e581f9aad6f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "a101b393-3da3-4f70-8f81-0e581f9aad6f",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "6aafa61c-9374-42f0-93f1-24faee44b682",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "b595e94c-90a8-47d9-badd-a8920d664bde",' ' "is_configured": true,' ' 
"data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "97311939-422a-49f0-86ad-a2655938e59a"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "97311939-422a-49f0-86ad-a2655938e59a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:13:15.205 05:10:34 -- bdev/blockdev.sh@353 -- # [[ -n Malloc0 00:13:15.205 Malloc1p0 00:13:15.205 Malloc1p1 00:13:15.205 Malloc2p0 00:13:15.205 Malloc2p1 00:13:15.205 Malloc2p2 00:13:15.205 Malloc2p3 00:13:15.205 Malloc2p4 00:13:15.205 Malloc2p5 00:13:15.205 Malloc2p6 00:13:15.205 Malloc2p7 00:13:15.205 TestPT 00:13:15.205 raid0 00:13:15.205 concat0 ]] 00:13:15.205 05:10:34 -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:15.206 05:10:34 -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "5473e72e-d8c1-4fe7-9e63-a8f903eb1731"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "5473e72e-d8c1-4fe7-9e63-a8f903eb1731",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "d30fe741-e2f0-53f6-b69f-a940b76820c8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "d30fe741-e2f0-53f6-b69f-a940b76820c8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "eed747de-69cd-5213-ac7c-9ac43d120948"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "eed747de-69cd-5213-ac7c-9ac43d120948",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "28a88ec2-9fa6-5cc3-a9fa-20aebe01dd9a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "28a88ec2-9fa6-5cc3-a9fa-20aebe01dd9a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "c34d5efb-2747-5f4b-9809-817060826f89"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c34d5efb-2747-5f4b-9809-817060826f89",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "3ffa7bd1-b7a2-51fa-8426-8b0cc127c4ac"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3ffa7bd1-b7a2-51fa-8426-8b0cc127c4ac",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "61d563fc-dd1c-51ed-92a2-af22a4e425aa"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "61d563fc-dd1c-51ed-92a2-af22a4e425aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "f48cf60e-fa3d-532d-89ab-9470cd0f3993"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f48cf60e-fa3d-532d-89ab-9470cd0f3993",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' 
' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "ef427d45-90f7-5eb9-9371-637387d22b5a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ef427d45-90f7-5eb9-9371-637387d22b5a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "126f2ebd-5883-5905-aa38-fcb425847da9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "126f2ebd-5883-5905-aa38-fcb425847da9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "76a0998e-4d62-53ef-a1d7-342ffbe51e84"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "76a0998e-4d62-53ef-a1d7-342ffbe51e84",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "a823701c-3e9c-50d8-b101-0775f2f29432"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a823701c-3e9c-50d8-b101-0775f2f29432",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' 
"passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "5c70fba3-4e6b-45fd-a10c-267337407493"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "5c70fba3-4e6b-45fd-a10c-267337407493",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "5c70fba3-4e6b-45fd-a10c-267337407493",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "2c8a2d26-ac83-42a0-b17e-95d4f517d9fa",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "4cb4fba0-11b8-4757-bece-584061fe9d29",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "d33fa50a-c35d-4078-a456-7a6277e42d52"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "d33fa50a-c35d-4078-a456-7a6277e42d52",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d33fa50a-c35d-4078-a456-7a6277e42d52",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "758ac65d-852e-46dd-83c9-a6ba5bfcc3dc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "989ed421-b73b-41af-aecf-5b5518febf9e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "a101b393-3da3-4f70-8f81-0e581f9aad6f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a101b393-3da3-4f70-8f81-0e581f9aad6f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' 
"reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "a101b393-3da3-4f70-8f81-0e581f9aad6f",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "6aafa61c-9374-42f0-93f1-24faee44b682",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "b595e94c-90a8-47d9-badd-a8920d664bde",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "97311939-422a-49f0-86ad-a2655938e59a"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "97311939-422a-49f0-86ad-a2655938e59a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:13:15.207 05:10:34 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.207 05:10:34 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]' 00:13:15.207 05:10:34 -- bdev/blockdev.sh@356 -- # echo filename=Malloc0 00:13:15.207 05:10:34 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.207 05:10:34 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]' 00:13:15.207 05:10:34 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0 00:13:15.207 05:10:34 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.207 05:10:34 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]' 00:13:15.207 05:10:34 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1 00:13:15.207 05:10:34 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.207 05:10:34 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]' 00:13:15.207 05:10:34 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0 00:13:15.207 05:10:34 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.207 05:10:34 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]' 00:13:15.207 05:10:34 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1 00:13:15.207 05:10:34 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.207 05:10:34 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]' 00:13:15.207 05:10:34 -- bdev/blockdev.sh@356 -- # echo 
filename=Malloc2p2 00:13:15.207 05:10:34 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.207 05:10:34 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]' 00:13:15.207 05:10:34 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3 00:13:15.207 05:10:34 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.207 05:10:34 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]' 00:13:15.207 05:10:34 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4 00:13:15.207 05:10:34 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.207 05:10:34 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]' 00:13:15.207 05:10:34 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5 00:13:15.207 05:10:34 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.207 05:10:34 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]' 00:13:15.207 05:10:34 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6 00:13:15.207 05:10:34 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.207 05:10:34 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]' 00:13:15.207 05:10:34 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7 00:13:15.207 05:10:34 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.207 05:10:34 -- bdev/blockdev.sh@355 -- # echo '[job_TestPT]' 00:13:15.207 05:10:34 -- bdev/blockdev.sh@356 -- # echo filename=TestPT 00:13:15.207 05:10:34 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.207 05:10:34 -- bdev/blockdev.sh@355 -- # echo '[job_raid0]' 00:13:15.207 05:10:34 -- bdev/blockdev.sh@356 -- # echo filename=raid0 00:13:15.207 05:10:34 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.207 05:10:34 -- bdev/blockdev.sh@355 -- # echo '[job_concat0]' 00:13:15.207 05:10:34 -- bdev/blockdev.sh@356 -- # echo filename=concat0 00:13:15.207 05:10:34 -- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:15.207 05:10:34 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:13:15.207 05:10:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:15.207 05:10:34 -- common/autotest_common.sh@10 -- # set +x 00:13:15.207 ************************************ 00:13:15.207 START TEST bdev_fio_trim 00:13:15.207 ************************************ 00:13:15.207 05:10:34 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:15.207 05:10:34 -- common/autotest_common.sh@1335 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:15.207 05:10:34 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:13:15.207 05:10:34 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:15.207 05:10:34 -- common/autotest_common.sh@1318 -- # local sanitizers 00:13:15.207 05:10:34 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:15.207 05:10:34 -- common/autotest_common.sh@1320 -- # shift 00:13:15.207 05:10:34 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:13:15.207 05:10:34 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:13:15.207 05:10:34 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:15.207 05:10:34 -- common/autotest_common.sh@1324 -- # grep libasan 00:13:15.207 05:10:34 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:13:15.207 05:10:34 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8 00:13:15.207 05:10:34 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]] 00:13:15.207 05:10:34 -- common/autotest_common.sh@1326 -- # break 00:13:15.207 05:10:34 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:15.207 05:10:34 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:15.465 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:15.465 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:15.465 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:15.465 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:15.465 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:15.465 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:15.465 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:15.465 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:15.465 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:15.465 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:15.465 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:15.465 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=spdk_bdev, iodepth=8 00:13:15.465 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:15.465 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:15.465 fio-3.35 00:13:15.465 Starting 14 threads 00:13:27.679 00:13:27.679 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=66972: Fri Jul 26 05:10:45 2024 00:13:27.679 write: IOPS=167k, BW=651MiB/s (682MB/s)(6508MiB/10005msec); 0 zone resets 00:13:27.679 slat (usec): min=3, max=10050, avg=30.25, stdev=189.03 00:13:27.679 clat (usec): min=24, max=11213, avg=213.91, stdev=500.39 00:13:27.679 lat (usec): min=39, max=11241, avg=244.16, stdev=533.53 00:13:27.679 clat percentiles (usec): 00:13:27.679 | 50.000th=[ 143], 99.000th=[ 4113], 99.900th=[ 5276], 99.990th=[ 7242], 00:13:27.679 | 99.999th=[10159] 00:13:27.679 bw ( KiB/s): min=491102, max=817539, per=100.00%, avg=666910.68, stdev=8862.95, samples=266 00:13:27.679 iops : min=122775, max=204384, avg=166727.47, stdev=2215.74, samples=266 00:13:27.679 trim: IOPS=167k, BW=651MiB/s (682MB/s)(6508MiB/10005msec); 0 zone resets 00:13:27.679 slat (usec): min=4, max=10039, avg=20.30, stdev=155.61 00:13:27.679 clat (usec): min=4, max=11242, avg=227.17, stdev=519.13 00:13:27.679 lat (usec): min=14, max=11259, avg=247.47, stdev=541.39 00:13:27.679 clat percentiles (usec): 00:13:27.679 | 50.000th=[ 159], 99.000th=[ 4146], 99.900th=[ 6128], 99.990th=[ 7242], 00:13:27.679 | 99.999th=[10159] 00:13:27.679 bw ( KiB/s): min=491102, max=817539, per=100.00%, avg=666910.68, stdev=8862.21, samples=266 00:13:27.679 iops : min=122775, max=204384, avg=166727.47, stdev=2215.55, samples=266 00:13:27.679 lat (usec) : 10=0.11%, 20=0.32%, 50=1.11%, 100=15.42%, 250=76.89% 00:13:27.679 lat (usec) : 500=4.20%, 750=0.24%, 1000=0.02% 00:13:27.679 lat (msec) : 2=0.03%, 4=0.56%, 10=1.09%, 20=0.01% 00:13:27.679 cpu : usr=68.44%, sys=1.04%, ctx=150406, majf=0, minf=15801 00:13:27.679 IO depths : 1=12.3%, 2=24.5%, 4=50.0%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:27.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.679 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.679 issued rwts: total=0,1666144,1666147,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.679 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:27.679 00:13:27.679 Run status group 0 (all jobs): 00:13:27.679 WRITE: bw=651MiB/s (682MB/s), 651MiB/s-651MiB/s (682MB/s-682MB/s), io=6508MiB (6825MB), run=10005-10005msec 00:13:27.679 TRIM: bw=651MiB/s (682MB/s), 651MiB/s-651MiB/s (682MB/s-682MB/s), io=6508MiB (6825MB), run=10005-10005msec 00:13:28.615 ----------------------------------------------------- 00:13:28.615 Suppressions used: 00:13:28.615 count bytes template 00:13:28.615 14 129 /usr/src/fio/parse.c 00:13:28.615 1 904 libcrypto.so 00:13:28.615 ----------------------------------------------------- 00:13:28.615 00:13:28.615 00:13:28.615 real 0m13.510s 00:13:28.615 user 1m39.998s 00:13:28.615 sys 0m2.499s 00:13:28.615 05:10:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:28.615 05:10:47 -- common/autotest_common.sh@10 -- # set +x 00:13:28.615 ************************************ 00:13:28.615 END TEST bdev_fio_trim 00:13:28.615 ************************************ 00:13:28.615 05:10:47 -- bdev/blockdev.sh@366 -- # rm -f 00:13:28.615 05:10:47 -- bdev/blockdev.sh@367 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:28.873 
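Note on the trim pass above: blockdev.sh keeps only the bdevs whose supported_io_types.unmap flag is true (the jq filter is visible in the xtrace lines), writes one [job_<name>]/filename=<name> section per match into bdev.fio, and runs fio with the spdk_bdev ioengine while preloading libasan ahead of the fio plugin, since this is an ASAN build. A hand-run equivalent, sketched under the assumption that an SPDK target exposing the same bdevs is reachable over the default RPC socket (rpc.py returns a JSON array, hence the leading .[] in the filter), would look roughly like:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    # keep only bdevs that can service unmap/trim, one fio job section per bdev
    for b in $($SPDK_DIR/scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.supported_io_types.unmap == true) | .name'); do
        printf '[job_%s]\nfilename=%s\n' "$b" "$b" >> bdev.fio
    done
    # ASAN builds must preload libasan before the fio plugin, exactly as the LD_PRELOAD line above shows
    LD_PRELOAD="/lib/x86_64-linux-gnu/libasan.so.8 $SPDK_DIR/build/fio/spdk_bdev" \
        /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        --spdk_json_conf=$SPDK_DIR/test/bdev/bdev.json bdev.fio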
/home/vagrant/spdk_repo/spdk 00:13:28.873 05:10:47 -- bdev/blockdev.sh@368 -- # popd 00:13:28.873 05:10:47 -- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT 00:13:28.873 00:13:28.873 real 0m27.530s 00:13:28.873 user 3m18.010s 00:13:28.873 sys 0m6.649s 00:13:28.873 05:10:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:28.873 ************************************ 00:13:28.873 END TEST bdev_fio 00:13:28.873 05:10:47 -- common/autotest_common.sh@10 -- # set +x 00:13:28.873 ************************************ 00:13:28.873 05:10:47 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:28.873 05:10:47 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:28.873 05:10:47 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:13:28.873 05:10:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:28.873 05:10:47 -- common/autotest_common.sh@10 -- # set +x 00:13:28.873 ************************************ 00:13:28.873 START TEST bdev_verify 00:13:28.873 ************************************ 00:13:28.873 05:10:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:28.873 [2024-07-26 05:10:47.852897] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:13:28.873 [2024-07-26 05:10:47.853162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67148 ] 00:13:29.132 [2024-07-26 05:10:48.028651] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:29.391 [2024-07-26 05:10:48.257796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.391 [2024-07-26 05:10:48.257813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.651 [2024-07-26 05:10:48.602440] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:29.651 [2024-07-26 05:10:48.602541] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:29.651 [2024-07-26 05:10:48.610407] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:29.651 [2024-07-26 05:10:48.610471] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:29.651 [2024-07-26 05:10:48.618431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:29.651 [2024-07-26 05:10:48.618486] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:29.651 [2024-07-26 05:10:48.618516] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:29.910 [2024-07-26 05:10:48.787727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:29.910 [2024-07-26 05:10:48.787880] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.910 [2024-07-26 05:10:48.787911] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:13:29.910 [2024-07-26 05:10:48.787925] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.910 [2024-07-26 
05:10:48.790634] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.910 [2024-07-26 05:10:48.790692] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:30.168 Running I/O for 5 seconds... 00:13:35.436 00:13:35.436 Latency(us) 00:13:35.436 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.436 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:35.436 Verification LBA range: start 0x0 length 0x1000 00:13:35.436 Malloc0 : 5.16 1630.73 6.37 0.00 0.00 77840.89 2085.24 221154.21 00:13:35.436 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:35.436 Verification LBA range: start 0x1000 length 0x1000 00:13:35.436 Malloc0 : 5.18 1570.83 6.14 0.00 0.00 81022.75 2278.87 224967.21 00:13:35.436 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:35.436 Verification LBA range: start 0x0 length 0x800 00:13:35.436 Malloc1p0 : 5.16 1132.37 4.42 0.00 0.00 111982.37 4081.11 135361.63 00:13:35.436 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:35.436 Verification LBA range: start 0x800 length 0x800 00:13:35.436 Malloc1p0 : 5.18 1095.19 4.28 0.00 0.00 116103.96 4081.11 137268.13 00:13:35.436 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:35.436 Verification LBA range: start 0x0 length 0x800 00:13:35.436 Malloc1p1 : 5.16 1132.07 4.42 0.00 0.00 111841.36 4200.26 131548.63 00:13:35.436 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:35.436 Verification LBA range: start 0x800 length 0x800 00:13:35.436 Malloc1p1 : 5.19 1094.88 4.28 0.00 0.00 115957.67 4289.63 133455.13 00:13:35.436 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:35.436 Verification LBA range: start 0x0 length 0x200 00:13:35.436 Malloc2p0 : 5.16 1131.76 4.42 0.00 0.00 111679.46 4319.42 126782.37 00:13:35.436 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:35.436 Verification LBA range: start 0x200 length 0x200 00:13:35.436 Malloc2p0 : 5.19 1094.58 4.28 0.00 0.00 115782.61 4230.05 128688.87 00:13:35.436 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:35.436 Verification LBA range: start 0x0 length 0x200 00:13:35.436 Malloc2p1 : 5.17 1131.45 4.42 0.00 0.00 111534.09 3902.37 122969.37 00:13:35.436 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:35.436 Verification LBA range: start 0x200 length 0x200 00:13:35.436 Malloc2p1 : 5.19 1094.10 4.27 0.00 0.00 115632.86 3902.37 124875.87 00:13:35.436 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:35.436 Verification LBA range: start 0x0 length 0x200 00:13:35.436 Malloc2p2 : 5.17 1131.16 4.42 0.00 0.00 111380.50 4200.26 118679.74 00:13:35.436 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:35.436 Verification LBA range: start 0x200 length 0x200 00:13:35.436 Malloc2p2 : 5.19 1093.59 4.27 0.00 0.00 115482.84 4140.68 120586.24 00:13:35.436 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:35.436 Verification LBA range: start 0x0 length 0x200 00:13:35.436 Malloc2p3 : 5.17 1130.85 4.42 0.00 0.00 111236.43 3991.74 114390.11 00:13:35.436 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:35.436 Verification LBA range: start 0x200 length 0x200 00:13:35.436 Malloc2p3 : 
5.19 1093.07 4.27 0.00 0.00 115323.35 4051.32 116296.61 00:13:35.436 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:35.436 Verification LBA range: start 0x0 length 0x200 00:13:35.436 Malloc2p4 : 5.17 1130.52 4.42 0.00 0.00 111059.60 4349.21 109623.85 00:13:35.436 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:35.436 Verification LBA range: start 0x200 length 0x200 00:13:35.436 Malloc2p4 : 5.20 1092.57 4.27 0.00 0.00 115124.56 4289.63 111530.36 00:13:35.436 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:35.436 Verification LBA range: start 0x0 length 0x200 00:13:35.436 Malloc2p5 : 5.17 1130.22 4.41 0.00 0.00 110896.28 4319.42 105334.23 00:13:35.436 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:35.436 Verification LBA range: start 0x200 length 0x200 00:13:35.436 Malloc2p5 : 5.20 1092.12 4.27 0.00 0.00 114979.49 4617.31 106287.48 00:13:35.436 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:35.436 Verification LBA range: start 0x0 length 0x200 00:13:35.436 Malloc2p6 : 5.19 1144.60 4.47 0.00 0.00 109799.93 4319.42 100567.97 00:13:35.436 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:35.436 Verification LBA range: start 0x200 length 0x200 00:13:35.436 Malloc2p6 : 5.20 1091.55 4.26 0.00 0.00 114804.46 4438.57 101997.85 00:13:35.436 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:35.436 Verification LBA range: start 0x0 length 0x200 00:13:35.436 Malloc2p7 : 5.19 1144.09 4.47 0.00 0.00 109661.61 4051.32 96278.34 00:13:35.436 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:35.436 Verification LBA range: start 0x200 length 0x200 00:13:35.436 Malloc2p7 : 5.20 1091.15 4.26 0.00 0.00 114632.78 4349.21 97231.59 00:13:35.436 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:35.436 Verification LBA range: start 0x0 length 0x1000 00:13:35.436 TestPT : 5.19 1132.81 4.43 0.00 0.00 110528.11 6047.19 95801.72 00:13:35.436 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:35.436 Verification LBA range: start 0x1000 length 0x1000 00:13:35.436 TestPT : 5.20 1074.74 4.20 0.00 0.00 116138.79 30504.03 97708.22 00:13:35.436 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:35.437 Verification LBA range: start 0x0 length 0x2000 00:13:35.437 raid0 : 5.19 1143.10 4.47 0.00 0.00 109274.13 3902.37 84362.71 00:13:35.437 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:35.437 Verification LBA range: start 0x2000 length 0x2000 00:13:35.437 raid0 : 5.21 1090.63 4.26 0.00 0.00 114256.36 4230.05 87222.46 00:13:35.437 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:35.437 Verification LBA range: start 0x0 length 0x2000 00:13:35.437 concat0 : 5.20 1142.60 4.46 0.00 0.00 109127.88 4498.15 81026.33 00:13:35.437 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:35.437 Verification LBA range: start 0x2000 length 0x2000 00:13:35.437 concat0 : 5.21 1090.39 4.26 0.00 0.00 114079.26 4438.57 84839.33 00:13:35.437 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:35.437 Verification LBA range: start 0x0 length 0x1000 00:13:35.437 raid1 : 5.20 1142.10 4.46 0.00 0.00 108952.39 4766.25 81026.33 00:13:35.437 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO 
size: 4096) 00:13:35.437 Verification LBA range: start 0x1000 length 0x1000 00:13:35.437 raid1 : 5.21 1090.10 4.26 0.00 0.00 113917.58 4498.15 85315.96 00:13:35.437 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:35.437 Verification LBA range: start 0x0 length 0x4e2 00:13:35.437 AIO0 : 5.20 1141.63 4.46 0.00 0.00 108780.83 4140.68 81502.95 00:13:35.437 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:35.437 Verification LBA range: start 0x4e2 length 0x4e2 00:13:35.437 AIO0 : 5.21 1089.87 4.26 0.00 0.00 113723.79 4349.21 85315.96 00:13:35.437 =================================================================================================================== 00:13:35.437 Total : 36611.42 143.01 0.00 0.00 109836.39 2085.24 224967.21 00:13:37.971 00:13:37.971 real 0m8.683s 00:13:37.971 user 0m15.671s 00:13:37.971 sys 0m0.578s 00:13:37.971 ************************************ 00:13:37.971 END TEST bdev_verify 00:13:37.971 ************************************ 00:13:37.971 05:10:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:37.971 05:10:56 -- common/autotest_common.sh@10 -- # set +x 00:13:37.971 05:10:56 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:37.971 05:10:56 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:13:37.971 05:10:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:37.971 05:10:56 -- common/autotest_common.sh@10 -- # set +x 00:13:37.971 ************************************ 00:13:37.971 START TEST bdev_verify_big_io 00:13:37.971 ************************************ 00:13:37.971 05:10:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:37.971 [2024-07-26 05:10:56.577934] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
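Note on the bdev_verify pass that just completed: bdevperf is run against the same bdev.json with a 4 KiB verify workload on core mask 0x3, which is why every bdev appears twice in the latency table above, once per core in the mask (the Core Mask 0x1 and 0x2 rows). Reproducing the pass by hand is the single invocation already present in the xtrace, repeated here only for readability:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    # 4 KiB verifying I/O, queue depth 128, 5 seconds, two reactors (mask 0x3)
    $SPDK_DIR/build/examples/bdevperf \
        --json $SPDK_DIR/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3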
00:13:37.971 [2024-07-26 05:10:56.578131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67263 ] 00:13:37.971 [2024-07-26 05:10:56.749713] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:37.971 [2024-07-26 05:10:56.929976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.971 [2024-07-26 05:10:56.929994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.230 [2024-07-26 05:10:57.260811] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:38.230 [2024-07-26 05:10:57.260902] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:38.230 [2024-07-26 05:10:57.268779] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:38.230 [2024-07-26 05:10:57.268845] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:38.230 [2024-07-26 05:10:57.276802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:38.230 [2024-07-26 05:10:57.276845] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:38.230 [2024-07-26 05:10:57.276896] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:38.489 [2024-07-26 05:10:57.449374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:38.489 [2024-07-26 05:10:57.449475] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.489 [2024-07-26 05:10:57.449502] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:13:38.489 [2024-07-26 05:10:57.449515] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.489 [2024-07-26 05:10:57.452189] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.489 [2024-07-26 05:10:57.452236] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:38.749 [2024-07-26 05:10:57.760541] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:13:38.749 [2024-07-26 05:10:57.763593] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:13:38.749 [2024-07-26 05:10:57.766839] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:13:38.749 [2024-07-26 05:10:57.770450] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:13:38.749 [2024-07-26 05:10:57.773490] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:13:38.749 [2024-07-26 05:10:57.776785] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:13:38.749 [2024-07-26 05:10:57.779947] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:13:38.749 [2024-07-26 05:10:57.783553] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:13:38.749 [2024-07-26 05:10:57.786721] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:13:38.749 [2024-07-26 05:10:57.790049] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:13:38.749 [2024-07-26 05:10:57.793251] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:13:38.749 [2024-07-26 05:10:57.796598] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:13:38.749 [2024-07-26 05:10:57.799793] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:13:38.749 [2024-07-26 05:10:57.803340] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:13:38.749 [2024-07-26 05:10:57.806681] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:13:38.749 [2024-07-26 05:10:57.809825] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:13:39.009 [2024-07-26 05:10:57.881441] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:13:39.009 [2024-07-26 05:10:57.887480] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:13:39.009 Running I/O for 5 seconds... 00:13:45.587 00:13:45.587 Latency(us) 00:13:45.587 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.587 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:45.587 Verification LBA range: start 0x0 length 0x100 00:13:45.587 Malloc0 : 5.73 305.86 19.12 0.00 0.00 409141.17 28597.53 1067641.02 00:13:45.587 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:45.587 Verification LBA range: start 0x100 length 0x100 00:13:45.587 Malloc0 : 5.73 306.65 19.17 0.00 0.00 409716.20 26691.03 1265917.21 00:13:45.587 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:45.587 Verification LBA range: start 0x0 length 0x80 00:13:45.587 Malloc1p0 : 5.74 219.59 13.72 0.00 0.00 562466.46 49807.36 1288795.23 00:13:45.587 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:45.587 Verification LBA range: start 0x80 length 0x80 00:13:45.587 Malloc1p0 : 5.82 185.14 11.57 0.00 0.00 663386.25 48377.48 1151527.10 00:13:45.587 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:45.587 Verification LBA range: start 0x0 length 0x80 00:13:45.587 Malloc1p1 : 5.96 106.15 6.63 0.00 0.00 1125966.46 50045.67 2409818.30 00:13:45.587 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:45.587 Verification LBA range: start 0x80 length 0x80 00:13:45.587 Malloc1p1 : 5.97 111.86 6.99 0.00 0.00 1068965.47 48377.48 2303054.20 00:13:45.587 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:45.587 Verification LBA range: start 0x0 length 0x20 00:13:45.587 Malloc2p0 : 5.74 57.13 3.57 0.00 0.00 520609.96 8519.68 861738.82 00:13:45.587 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:45.587 Verification LBA range: start 0x20 length 0x20 00:13:45.587 Malloc2p0 : 5.73 60.90 3.81 0.00 0.00 490984.14 8817.57 751161.72 00:13:45.587 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:45.587 Verification LBA range: start 0x0 length 0x20 00:13:45.587 Malloc2p1 : 5.74 57.11 3.57 0.00 0.00 517986.24 8698.41 842673.80 00:13:45.587 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:45.587 Verification LBA range: start 0x20 length 0x20 00:13:45.587 Malloc2p1 : 5.73 60.88 3.81 0.00 0.00 488401.71 8817.57 735909.70 00:13:45.587 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:45.587 Verification LBA range: start 0x0 length 0x20 00:13:45.587 Malloc2p2 : 5.75 57.09 3.57 0.00 0.00 515277.53 8102.63 827421.79 00:13:45.587 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:45.587 Verification LBA range: start 0x20 length 0x20 00:13:45.587 Malloc2p2 : 5.73 60.87 3.80 0.00 0.00 486124.33 8340.95 720657.69 00:13:45.587 Job: Malloc2p3 (Core Mask 0x1, 
workload: verify, depth: 32, IO size: 65536) 00:13:45.587 Verification LBA range: start 0x0 length 0x20 00:13:45.587 Malloc2p3 : 5.75 57.07 3.57 0.00 0.00 512825.38 8043.05 812169.77 00:13:45.587 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:45.587 Verification LBA range: start 0x20 length 0x20 00:13:45.587 Malloc2p3 : 5.74 60.85 3.80 0.00 0.00 483774.49 8340.95 705405.67 00:13:45.587 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:45.587 Verification LBA range: start 0x0 length 0x20 00:13:45.587 Malloc2p4 : 5.75 57.06 3.57 0.00 0.00 510354.52 8340.95 793104.76 00:13:45.587 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:45.587 Verification LBA range: start 0x20 length 0x20 00:13:45.587 Malloc2p4 : 5.74 60.83 3.80 0.00 0.00 481671.50 8519.68 686340.65 00:13:45.587 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:45.588 Verification LBA range: start 0x0 length 0x20 00:13:45.588 Malloc2p5 : 5.75 57.05 3.57 0.00 0.00 507747.24 8757.99 774039.74 00:13:45.588 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:45.588 Verification LBA range: start 0x20 length 0x20 00:13:45.588 Malloc2p5 : 5.74 60.80 3.80 0.00 0.00 479095.90 8936.73 667275.64 00:13:45.588 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:45.588 Verification LBA range: start 0x0 length 0x20 00:13:45.588 Malloc2p6 : 5.75 57.04 3.56 0.00 0.00 505100.56 8460.10 758787.72 00:13:45.588 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:45.588 Verification LBA range: start 0x20 length 0x20 00:13:45.588 Malloc2p6 : 5.74 60.79 3.80 0.00 0.00 476630.23 8638.84 648210.62 00:13:45.588 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:45.588 Verification LBA range: start 0x0 length 0x20 00:13:45.588 Malloc2p7 : 5.81 60.05 3.75 0.00 0.00 481926.12 7506.85 739722.71 00:13:45.588 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:45.588 Verification LBA range: start 0x20 length 0x20 00:13:45.588 Malloc2p7 : 5.74 60.76 3.80 0.00 0.00 474307.88 7506.85 632958.60 00:13:45.588 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:45.588 Verification LBA range: start 0x0 length 0x100 00:13:45.588 TestPT : 6.00 111.27 6.95 0.00 0.00 1018137.35 16562.73 2379314.27 00:13:45.588 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:45.588 Verification LBA range: start 0x100 length 0x100 00:13:45.588 TestPT : 5.95 101.40 6.34 0.00 0.00 1109515.29 68157.44 2257298.15 00:13:45.588 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:45.588 Verification LBA range: start 0x0 length 0x200 00:13:45.588 raid0 : 6.02 115.53 7.22 0.00 0.00 962394.75 45756.04 2379314.27 00:13:45.588 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:45.588 Verification LBA range: start 0x200 length 0x200 00:13:45.588 raid0 : 5.92 117.49 7.34 0.00 0.00 955461.90 42896.29 2287802.18 00:13:45.588 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:45.588 Verification LBA range: start 0x0 length 0x200 00:13:45.588 concat0 : 5.97 121.29 7.58 0.00 0.00 901700.31 35508.60 2364062.25 00:13:45.588 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:45.588 Verification LBA range: start 0x200 length 0x200 00:13:45.588 concat0 : 5.98 121.11 7.57 
0.00 0.00 908298.35 38844.97 2287802.18 00:13:45.588 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:45.588 Verification LBA range: start 0x0 length 0x100 00:13:45.588 raid1 : 5.99 133.73 8.36 0.00 0.00 805299.84 26095.24 2364062.25 00:13:45.588 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:45.588 Verification LBA range: start 0x100 length 0x100 00:13:45.588 raid1 : 5.98 138.64 8.67 0.00 0.00 785455.25 22520.55 2287802.18 00:13:45.588 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:13:45.588 Verification LBA range: start 0x0 length 0x4e 00:13:45.588 AIO0 : 6.03 158.27 9.89 0.00 0.00 408586.96 1407.53 1357429.29 00:13:45.588 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:13:45.588 Verification LBA range: start 0x4e length 0x4e 00:13:45.588 AIO0 : 5.98 141.29 8.83 0.00 0.00 462088.88 3768.32 1319299.26 00:13:45.588 =================================================================================================================== 00:13:45.588 Total : 3441.57 215.10 0.00 0.00 648460.16 1407.53 2409818.30 00:13:45.847 [2024-07-26 05:11:04.858434] thread.c:2244:spdk_io_device_unregister: *WARNING*: io_device bdev_Malloc3 (0x516000009681) has 119 for_each calls outstanding 00:13:47.751 ************************************ 00:13:47.751 END TEST bdev_verify_big_io 00:13:47.751 ************************************ 00:13:47.751 00:13:47.751 real 0m9.843s 00:13:47.751 user 0m18.172s 00:13:47.751 sys 0m0.518s 00:13:47.752 05:11:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:47.752 05:11:06 -- common/autotest_common.sh@10 -- # set +x 00:13:47.752 05:11:06 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:47.752 05:11:06 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:13:47.752 05:11:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:47.752 05:11:06 -- common/autotest_common.sh@10 -- # set +x 00:13:47.752 ************************************ 00:13:47.752 START TEST bdev_write_zeroes 00:13:47.752 ************************************ 00:13:47.752 05:11:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:47.752 [2024-07-26 05:11:06.470877] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
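Note on the 64 KiB big-IO verify pass above: the queue-depth warnings appear because 128 outstanding 65536-byte requests exceed what the small Malloc2pX splits (clamped to 32) and the 5000-block AIO0 file (clamped to 78) can accept at once, so bdevperf lowers the per-job depth instead of failing; the write_zeroes pass starting here reuses the same binary with -w write_zeroes on a single core. The geometry that drives the clamp is in the bdev JSON and can be pulled for any one device with a query along these lines (assuming a running target and the stock rpc.py):

    # block size and capacity bound how many 64 KiB requests a bdev can hold at once
    $SPDK_DIR/scripts/rpc.py bdev_get_bdevs -b Malloc2p0 \
        | jq '.[0] | {block_size, num_blocks, bytes: (.block_size * .num_blocks)}'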
00:13:47.752 [2024-07-26 05:11:06.471100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67379 ] 00:13:47.752 [2024-07-26 05:11:06.640028] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.752 [2024-07-26 05:11:06.821084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.319 [2024-07-26 05:11:07.149042] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:48.319 [2024-07-26 05:11:07.149160] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:48.319 [2024-07-26 05:11:07.156993] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:48.319 [2024-07-26 05:11:07.157067] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:48.319 [2024-07-26 05:11:07.165011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:48.319 [2024-07-26 05:11:07.165077] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:48.319 [2024-07-26 05:11:07.165095] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:48.319 [2024-07-26 05:11:07.334338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:48.319 [2024-07-26 05:11:07.334654] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.319 [2024-07-26 05:11:07.334719] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:13:48.319 [2024-07-26 05:11:07.334748] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.319 [2024-07-26 05:11:07.337324] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.319 [2024-07-26 05:11:07.337367] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:48.578 Running I/O for 1 seconds... 
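The vbdev_passthru notices in this startup trace come from bdev.json, which stacks a passthru bdev named TestPT on top of Malloc3; the same pairing is visible in the driver_specific.passthru block of the bdev dump earlier in the log. A minimal config entry for that stacking, sketched from the generic SPDK JSON-config layout (the method and parameter names are assumptions based on the current passthru module, since bdev.json itself is not reproduced in this log), looks like:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_passthru_create",
              "params": { "base_bdev_name": "Malloc3", "name": "TestPT" } }
          ]
        }
      ]
    }

The base Malloc3 bdev has to be created earlier in the same config for the "vbdev creation deferred pending base bdev arrival" path seen above to resolve.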
00:13:49.955 00:13:49.955 Latency(us) 00:13:49.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.955 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:49.955 Malloc0 : 1.04 5166.28 20.18 0.00 0.00 24757.90 625.57 42896.29 00:13:49.955 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:49.955 Malloc1p0 : 1.04 5158.16 20.15 0.00 0.00 24759.92 737.28 42181.35 00:13:49.955 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:49.955 Malloc1p1 : 1.04 5151.08 20.12 0.00 0.00 24743.70 834.09 41466.41 00:13:49.955 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:49.955 Malloc2p0 : 1.05 5144.18 20.09 0.00 0.00 24729.08 733.56 40751.48 00:13:49.955 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:49.955 Malloc2p1 : 1.05 5137.23 20.07 0.00 0.00 24717.26 726.11 40036.54 00:13:49.955 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:49.955 Malloc2p2 : 1.05 5129.69 20.04 0.00 0.00 24706.77 718.66 39321.60 00:13:49.955 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:49.955 Malloc2p3 : 1.05 5122.42 20.01 0.00 0.00 24701.43 741.00 38844.97 00:13:49.955 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:49.955 Malloc2p4 : 1.05 5114.93 19.98 0.00 0.00 24687.77 800.58 38130.04 00:13:49.955 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:49.955 Malloc2p5 : 1.05 5107.83 19.95 0.00 0.00 24667.62 845.27 37176.79 00:13:49.955 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:49.955 Malloc2p6 : 1.05 5100.77 19.92 0.00 0.00 24653.09 789.41 36223.53 00:13:49.955 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:49.955 Malloc2p7 : 1.06 5093.33 19.90 0.00 0.00 24634.52 878.78 35270.28 00:13:49.955 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:49.955 TestPT : 1.06 5085.96 19.87 0.00 0.00 24618.15 871.33 34317.03 00:13:49.955 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:49.955 raid0 : 1.06 5077.24 19.83 0.00 0.00 24600.13 1690.53 32410.53 00:13:49.955 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:49.955 concat0 : 1.06 5068.60 19.80 0.00 0.00 24535.96 1675.64 30384.87 00:13:49.955 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:49.955 raid1 : 1.06 5057.85 19.76 0.00 0.00 24478.13 2695.91 27763.43 00:13:49.955 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:49.955 AIO0 : 1.07 5156.23 20.14 0.00 0.00 23883.38 618.12 27882.59 00:13:49.955 =================================================================================================================== 00:13:49.955 Total : 81871.77 319.81 0.00 0.00 24616.11 618.12 42896.29 00:13:51.858 00:13:51.858 real 0m4.428s 00:13:51.858 user 0m3.860s 00:13:51.858 sys 0m0.405s 00:13:51.858 05:11:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:51.858 05:11:10 -- common/autotest_common.sh@10 -- # set +x 00:13:51.858 ************************************ 00:13:51.858 END TEST bdev_write_zeroes 00:13:51.858 ************************************ 00:13:51.858 05:11:10 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:51.858 05:11:10 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:13:51.858 05:11:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:51.858 05:11:10 -- common/autotest_common.sh@10 -- # set +x 00:13:51.858 ************************************ 00:13:51.858 START TEST bdev_json_nonenclosed 00:13:51.858 ************************************ 00:13:51.859 05:11:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:51.859 [2024-07-26 05:11:10.955691] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:13:51.859 [2024-07-26 05:11:10.956194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67449 ] 00:13:52.117 [2024-07-26 05:11:11.130442] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.376 [2024-07-26 05:11:11.370695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.376 [2024-07-26 05:11:11.370986] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:52.376 [2024-07-26 05:11:11.371365] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:52.943 ************************************ 00:13:52.943 END TEST bdev_json_nonenclosed 00:13:52.943 ************************************ 00:13:52.943 00:13:52.943 real 0m0.929s 00:13:52.943 user 0m0.697s 00:13:52.944 sys 0m0.131s 00:13:52.944 05:11:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:52.944 05:11:11 -- common/autotest_common.sh@10 -- # set +x 00:13:52.944 05:11:11 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:52.944 05:11:11 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:13:52.944 05:11:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:52.944 05:11:11 -- common/autotest_common.sh@10 -- # set +x 00:13:52.944 ************************************ 00:13:52.944 START TEST bdev_json_nonarray 00:13:52.944 ************************************ 00:13:52.944 05:11:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:52.944 [2024-07-26 05:11:11.935901] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
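bdev_json_nonenclosed and the bdev_json_nonarray test that follows are negative tests: bdevperf is pointed at deliberately malformed config files and is expected to stop with the json_config errors seen in these traces rather than start I/O. For contrast, a well-formed SPDK JSON config is a single object whose "subsystems" key is an array, roughly:

    { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }

nonenclosed.json drops the enclosing braces (hence "not enclosed in {}") and nonarray.json makes "subsystems" something other than an array (hence "'subsystems' should be an array"); the harness counts these runs as passing only because bdevperf refuses the config and spdk_app_stop reports a non-zero status.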
00:13:52.944 [2024-07-26 05:11:11.936169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67480 ] 00:13:53.202 [2024-07-26 05:11:12.105282] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.202 [2024-07-26 05:11:12.298164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.202 [2024-07-26 05:11:12.298401] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:53.202 [2024-07-26 05:11:12.298432] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:53.770 00:13:53.770 real 0m0.854s 00:13:53.770 user 0m0.625s 00:13:53.770 sys 0m0.128s 00:13:53.770 05:11:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:53.770 05:11:12 -- common/autotest_common.sh@10 -- # set +x 00:13:53.770 ************************************ 00:13:53.770 END TEST bdev_json_nonarray 00:13:53.770 ************************************ 00:13:53.770 05:11:12 -- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]] 00:13:53.770 05:11:12 -- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite '' 00:13:53.770 05:11:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:53.770 05:11:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:53.770 05:11:12 -- common/autotest_common.sh@10 -- # set +x 00:13:53.770 ************************************ 00:13:53.770 START TEST bdev_qos 00:13:53.770 ************************************ 00:13:53.770 05:11:12 -- common/autotest_common.sh@1104 -- # qos_test_suite '' 00:13:53.770 05:11:12 -- bdev/blockdev.sh@444 -- # QOS_PID=67511 00:13:53.770 Process qos testing pid: 67511 00:13:53.770 05:11:12 -- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 67511' 00:13:53.770 05:11:12 -- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:13:53.770 05:11:12 -- bdev/blockdev.sh@447 -- # waitforlisten 67511 00:13:53.770 05:11:12 -- bdev/blockdev.sh@443 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:13:53.770 05:11:12 -- common/autotest_common.sh@819 -- # '[' -z 67511 ']' 00:13:53.770 05:11:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.770 05:11:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:53.770 05:11:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.770 05:11:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:53.770 05:11:12 -- common/autotest_common.sh@10 -- # set +x 00:13:53.770 [2024-07-26 05:11:12.845397] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
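Note on the QoS test starting here: bdevperf is launched in -z (wait-for-RPC) mode on core 1 with a 60-second 4 KiB randread workload, the targets are then built over RPC, and the run is kicked off with the bdevperf.py perform_tests helper; the rpc_cmd calls for Malloc_0 and Null_1 appear in the trace that follows. A condensed, hand-run sketch of that setup (default RPC socket, sizes exactly as used here; the real script also waits for the socket to come up) is:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    $SPDK_DIR/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 &
    # create the two targets: 128 MiB each, 512 B blocks (262144 blocks, matching the JSON below)
    $SPDK_DIR/scripts/rpc.py bdev_malloc_create -b Malloc_0 128 512
    $SPDK_DIR/scripts/rpc.py bdev_null_create Null_1 128 512
    $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py perform_tests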
00:13:53.770 [2024-07-26 05:11:12.845578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67511 ] 00:13:54.028 [2024-07-26 05:11:13.022626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.286 [2024-07-26 05:11:13.261033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.850 05:11:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:54.850 05:11:13 -- common/autotest_common.sh@852 -- # return 0 00:13:54.850 05:11:13 -- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:13:54.850 05:11:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.850 05:11:13 -- common/autotest_common.sh@10 -- # set +x 00:13:54.850 Malloc_0 00:13:54.850 05:11:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.850 05:11:13 -- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0 00:13:54.850 05:11:13 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_0 00:13:54.850 05:11:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:54.850 05:11:13 -- common/autotest_common.sh@889 -- # local i 00:13:54.850 05:11:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:54.850 05:11:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:54.850 05:11:13 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:54.850 05:11:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.850 05:11:13 -- common/autotest_common.sh@10 -- # set +x 00:13:54.850 05:11:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.850 05:11:13 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:13:54.850 05:11:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.850 05:11:13 -- common/autotest_common.sh@10 -- # set +x 00:13:54.850 [ 00:13:54.850 { 00:13:54.850 "name": "Malloc_0", 00:13:54.850 "aliases": [ 00:13:54.850 "e78d759f-ddac-454e-a34a-23a0cd61a13f" 00:13:54.850 ], 00:13:54.850 "product_name": "Malloc disk", 00:13:54.850 "block_size": 512, 00:13:54.850 "num_blocks": 262144, 00:13:54.850 "uuid": "e78d759f-ddac-454e-a34a-23a0cd61a13f", 00:13:54.850 "assigned_rate_limits": { 00:13:54.850 "rw_ios_per_sec": 0, 00:13:54.850 "rw_mbytes_per_sec": 0, 00:13:54.850 "r_mbytes_per_sec": 0, 00:13:54.850 "w_mbytes_per_sec": 0 00:13:54.850 }, 00:13:54.850 "claimed": false, 00:13:54.850 "zoned": false, 00:13:54.850 "supported_io_types": { 00:13:54.850 "read": true, 00:13:54.850 "write": true, 00:13:54.850 "unmap": true, 00:13:54.850 "write_zeroes": true, 00:13:54.850 "flush": true, 00:13:54.850 "reset": true, 00:13:54.850 "compare": false, 00:13:54.850 "compare_and_write": false, 00:13:54.850 "abort": true, 00:13:54.850 "nvme_admin": false, 00:13:54.850 "nvme_io": false 00:13:54.850 }, 00:13:54.850 "memory_domains": [ 00:13:54.850 { 00:13:54.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.850 "dma_device_type": 2 00:13:54.850 } 00:13:54.850 ], 00:13:54.850 "driver_specific": {} 00:13:54.850 } 00:13:54.850 ] 00:13:54.850 05:11:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.850 05:11:13 -- common/autotest_common.sh@895 -- # return 0 00:13:54.850 05:11:13 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512 00:13:54.850 05:11:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.850 05:11:13 -- common/autotest_common.sh@10 -- # 
set +x 00:13:54.850 Null_1 00:13:54.850 05:11:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.850 05:11:13 -- bdev/blockdev.sh@452 -- # waitforbdev Null_1 00:13:54.850 05:11:13 -- common/autotest_common.sh@887 -- # local bdev_name=Null_1 00:13:54.850 05:11:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:54.850 05:11:13 -- common/autotest_common.sh@889 -- # local i 00:13:54.850 05:11:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:54.850 05:11:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:54.850 05:11:13 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:54.850 05:11:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.850 05:11:13 -- common/autotest_common.sh@10 -- # set +x 00:13:55.108 05:11:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:55.108 05:11:13 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:13:55.108 05:11:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:55.108 05:11:13 -- common/autotest_common.sh@10 -- # set +x 00:13:55.108 [ 00:13:55.108 { 00:13:55.108 "name": "Null_1", 00:13:55.108 "aliases": [ 00:13:55.108 "f636f895-51e5-41d4-9a23-f3823284d347" 00:13:55.108 ], 00:13:55.108 "product_name": "Null disk", 00:13:55.108 "block_size": 512, 00:13:55.108 "num_blocks": 262144, 00:13:55.108 "uuid": "f636f895-51e5-41d4-9a23-f3823284d347", 00:13:55.108 "assigned_rate_limits": { 00:13:55.108 "rw_ios_per_sec": 0, 00:13:55.108 "rw_mbytes_per_sec": 0, 00:13:55.108 "r_mbytes_per_sec": 0, 00:13:55.108 "w_mbytes_per_sec": 0 00:13:55.108 }, 00:13:55.108 "claimed": false, 00:13:55.108 "zoned": false, 00:13:55.108 "supported_io_types": { 00:13:55.108 "read": true, 00:13:55.108 "write": true, 00:13:55.108 "unmap": false, 00:13:55.108 "write_zeroes": true, 00:13:55.108 "flush": false, 00:13:55.108 "reset": true, 00:13:55.108 "compare": false, 00:13:55.108 "compare_and_write": false, 00:13:55.108 "abort": true, 00:13:55.108 "nvme_admin": false, 00:13:55.108 "nvme_io": false 00:13:55.108 }, 00:13:55.108 "driver_specific": {} 00:13:55.108 } 00:13:55.108 ] 00:13:55.108 05:11:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:55.108 05:11:13 -- common/autotest_common.sh@895 -- # return 0 00:13:55.108 05:11:13 -- bdev/blockdev.sh@455 -- # qos_function_test 00:13:55.108 05:11:13 -- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000 00:13:55.108 05:11:13 -- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2 00:13:55.108 05:11:13 -- bdev/blockdev.sh@454 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:55.108 05:11:13 -- bdev/blockdev.sh@410 -- # local io_result=0 00:13:55.108 05:11:13 -- bdev/blockdev.sh@411 -- # local iops_limit=0 00:13:55.108 05:11:13 -- bdev/blockdev.sh@412 -- # local bw_limit=0 00:13:55.108 05:11:13 -- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0 00:13:55.108 05:11:13 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:13:55.108 05:11:13 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:13:55.108 05:11:13 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:55.108 05:11:13 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:55.108 05:11:13 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:13:55.108 05:11:13 -- bdev/blockdev.sh@376 -- # tail -1 00:13:55.108 Running I/O for 60 seconds... 
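Note on qos_function_test: it first samples the unthrottled Malloc_0 with iostat.py during the 60-second run, derives an IOPS cap from that baseline (15000 here, against a measured rate of roughly 61k IOPS), applies it with bdev_set_qos_limit, and then bdev_qos_iops re-measures and accepts the result only inside a plus/minus 10% window (13500 to 16500 in the trace that follows). The throttle-and-recheck step boils down to:

    # apply the cap chosen from the unthrottled baseline, then confirm the device settles near it
    $SPDK_DIR/scripts/rpc.py bdev_set_qos_limit --rw_ios_per_sec 15000 Malloc_0
    $SPDK_DIR/scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0   # expect roughly 15k IOPS per interval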
00:14:00.391 05:11:19 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 61247.17 244988.69 0.00 0.00 246784.00 0.00 0.00 ' 00:14:00.391 05:11:19 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:14:00.391 05:11:19 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:14:00.391 05:11:19 -- bdev/blockdev.sh@378 -- # iostat_result=61247.17 00:14:00.391 05:11:19 -- bdev/blockdev.sh@383 -- # echo 61247 00:14:00.391 05:11:19 -- bdev/blockdev.sh@414 -- # io_result=61247 00:14:00.391 05:11:19 -- bdev/blockdev.sh@416 -- # iops_limit=15000 00:14:00.391 05:11:19 -- bdev/blockdev.sh@417 -- # '[' 15000 -gt 1000 ']' 00:14:00.391 05:11:19 -- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 15000 Malloc_0 00:14:00.391 05:11:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:00.391 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:14:00.391 05:11:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:00.391 05:11:19 -- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 15000 IOPS Malloc_0 00:14:00.391 05:11:19 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:00.391 05:11:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:00.391 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:14:00.391 ************************************ 00:14:00.391 START TEST bdev_qos_iops 00:14:00.391 ************************************ 00:14:00.391 05:11:19 -- common/autotest_common.sh@1104 -- # run_qos_test 15000 IOPS Malloc_0 00:14:00.391 05:11:19 -- bdev/blockdev.sh@387 -- # local qos_limit=15000 00:14:00.391 05:11:19 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:14:00.391 05:11:19 -- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0 00:14:00.391 05:11:19 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:14:00.391 05:11:19 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:14:00.391 05:11:19 -- bdev/blockdev.sh@375 -- # local iostat_result 00:14:00.391 05:11:19 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:00.391 05:11:19 -- bdev/blockdev.sh@376 -- # tail -1 00:14:00.391 05:11:19 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:14:05.660 05:11:24 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 14980.66 59922.62 0.00 0.00 60540.00 0.00 0.00 ' 00:14:05.660 05:11:24 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:14:05.660 05:11:24 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:14:05.660 05:11:24 -- bdev/blockdev.sh@378 -- # iostat_result=14980.66 00:14:05.660 05:11:24 -- bdev/blockdev.sh@383 -- # echo 14980 00:14:05.660 05:11:24 -- bdev/blockdev.sh@390 -- # qos_result=14980 00:14:05.660 05:11:24 -- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']' 00:14:05.660 05:11:24 -- bdev/blockdev.sh@394 -- # lower_limit=13500 00:14:05.660 05:11:24 -- bdev/blockdev.sh@395 -- # upper_limit=16500 00:14:05.660 05:11:24 -- bdev/blockdev.sh@398 -- # '[' 14980 -lt 13500 ']' 00:14:05.660 05:11:24 -- bdev/blockdev.sh@398 -- # '[' 14980 -gt 16500 ']' 00:14:05.660 00:14:05.660 real 0m5.224s 00:14:05.660 user 0m0.123s 00:14:05.660 sys 0m0.041s 00:14:05.660 05:11:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:05.660 05:11:24 -- common/autotest_common.sh@10 -- # set +x 00:14:05.660 ************************************ 00:14:05.660 END TEST bdev_qos_iops 00:14:05.660 ************************************ 00:14:05.660 05:11:24 -- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1 00:14:05.660 05:11:24 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:14:05.660 05:11:24 -- 
bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:14:05.660 05:11:24 -- bdev/blockdev.sh@375 -- # local iostat_result 00:14:05.660 05:11:24 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:05.660 05:11:24 -- bdev/blockdev.sh@376 -- # grep Null_1 00:14:05.660 05:11:24 -- bdev/blockdev.sh@376 -- # tail -1 00:14:10.928 05:11:29 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 26062.52 104250.07 0.00 0.00 105472.00 0.00 0.00 ' 00:14:10.928 05:11:29 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:14:10.928 05:11:29 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:10.928 05:11:29 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:14:10.928 05:11:29 -- bdev/blockdev.sh@380 -- # iostat_result=105472.00 00:14:10.928 05:11:29 -- bdev/blockdev.sh@383 -- # echo 105472 00:14:10.928 05:11:29 -- bdev/blockdev.sh@425 -- # bw_limit=105472 00:14:10.928 05:11:29 -- bdev/blockdev.sh@426 -- # bw_limit=10 00:14:10.928 05:11:29 -- bdev/blockdev.sh@427 -- # '[' 10 -lt 2 ']' 00:14:10.928 05:11:29 -- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 10 Null_1 00:14:10.928 05:11:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.928 05:11:29 -- common/autotest_common.sh@10 -- # set +x 00:14:10.928 05:11:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.928 05:11:29 -- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 10 BANDWIDTH Null_1 00:14:10.928 05:11:29 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:10.928 05:11:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:10.928 05:11:29 -- common/autotest_common.sh@10 -- # set +x 00:14:10.928 ************************************ 00:14:10.928 START TEST bdev_qos_bw 00:14:10.928 ************************************ 00:14:10.928 05:11:29 -- common/autotest_common.sh@1104 -- # run_qos_test 10 BANDWIDTH Null_1 00:14:10.928 05:11:29 -- bdev/blockdev.sh@387 -- # local qos_limit=10 00:14:10.928 05:11:29 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:14:10.928 05:11:29 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1 00:14:10.928 05:11:29 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:14:10.928 05:11:29 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:14:10.928 05:11:29 -- bdev/blockdev.sh@375 -- # local iostat_result 00:14:10.928 05:11:29 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:10.928 05:11:29 -- bdev/blockdev.sh@376 -- # grep Null_1 00:14:10.928 05:11:29 -- bdev/blockdev.sh@376 -- # tail -1 00:14:16.197 05:11:34 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 2564.04 10256.15 0.00 0.00 10548.00 0.00 0.00 ' 00:14:16.197 05:11:34 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:14:16.197 05:11:34 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:16.197 05:11:34 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:14:16.197 05:11:34 -- bdev/blockdev.sh@380 -- # iostat_result=10548.00 00:14:16.197 05:11:34 -- bdev/blockdev.sh@383 -- # echo 10548 00:14:16.197 05:11:34 -- bdev/blockdev.sh@390 -- # qos_result=10548 00:14:16.197 05:11:34 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:16.197 05:11:34 -- bdev/blockdev.sh@392 -- # qos_limit=10240 00:14:16.197 05:11:34 -- bdev/blockdev.sh@394 -- # lower_limit=9216 00:14:16.197 05:11:34 -- bdev/blockdev.sh@395 -- # upper_limit=11264 00:14:16.197 05:11:34 -- bdev/blockdev.sh@398 -- # '[' 10548 -lt 9216 ']' 00:14:16.197 05:11:34 -- bdev/blockdev.sh@398 -- # '[' 
10548 -gt 11264 ']' 00:14:16.197 00:14:16.197 real 0m5.277s 00:14:16.197 user 0m0.133s 00:14:16.197 sys 0m0.029s 00:14:16.197 05:11:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:16.197 05:11:34 -- common/autotest_common.sh@10 -- # set +x 00:14:16.197 ************************************ 00:14:16.197 END TEST bdev_qos_bw 00:14:16.197 ************************************ 00:14:16.197 05:11:35 -- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:14:16.197 05:11:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:16.197 05:11:35 -- common/autotest_common.sh@10 -- # set +x 00:14:16.197 05:11:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:16.197 05:11:35 -- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:14:16.197 05:11:35 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:16.197 05:11:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:16.197 05:11:35 -- common/autotest_common.sh@10 -- # set +x 00:14:16.197 ************************************ 00:14:16.197 START TEST bdev_qos_ro_bw 00:14:16.197 ************************************ 00:14:16.197 05:11:35 -- common/autotest_common.sh@1104 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:14:16.197 05:11:35 -- bdev/blockdev.sh@387 -- # local qos_limit=2 00:14:16.197 05:11:35 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:14:16.197 05:11:35 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0 00:14:16.197 05:11:35 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:14:16.198 05:11:35 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:14:16.198 05:11:35 -- bdev/blockdev.sh@375 -- # local iostat_result 00:14:16.198 05:11:35 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:16.198 05:11:35 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:14:16.198 05:11:35 -- bdev/blockdev.sh@376 -- # tail -1 00:14:21.466 05:11:40 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 511.12 2044.49 0.00 0.00 2060.00 0.00 0.00 ' 00:14:21.466 05:11:40 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:14:21.466 05:11:40 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:21.466 05:11:40 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:14:21.466 05:11:40 -- bdev/blockdev.sh@380 -- # iostat_result=2060.00 00:14:21.466 05:11:40 -- bdev/blockdev.sh@383 -- # echo 2060 00:14:21.466 05:11:40 -- bdev/blockdev.sh@390 -- # qos_result=2060 00:14:21.466 05:11:40 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:21.466 05:11:40 -- bdev/blockdev.sh@392 -- # qos_limit=2048 00:14:21.466 05:11:40 -- bdev/blockdev.sh@394 -- # lower_limit=1843 00:14:21.466 05:11:40 -- bdev/blockdev.sh@395 -- # upper_limit=2252 00:14:21.466 05:11:40 -- bdev/blockdev.sh@398 -- # '[' 2060 -lt 1843 ']' 00:14:21.466 05:11:40 -- bdev/blockdev.sh@398 -- # '[' 2060 -gt 2252 ']' 00:14:21.466 00:14:21.466 real 0m5.187s 00:14:21.466 user 0m0.127s 00:14:21.466 sys 0m0.036s 00:14:21.466 05:11:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:21.466 05:11:40 -- common/autotest_common.sh@10 -- # set +x 00:14:21.466 ************************************ 00:14:21.466 END TEST bdev_qos_ro_bw 00:14:21.466 ************************************ 00:14:21.466 05:11:40 -- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:14:21.466 05:11:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:21.466 05:11:40 -- common/autotest_common.sh@10 -- # set +x 00:14:22.033 05:11:40 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:22.033 05:11:40 -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1 00:14:22.033 05:11:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:22.033 05:11:40 -- common/autotest_common.sh@10 -- # set +x 00:14:22.033 00:14:22.033 Latency(us) 00:14:22.033 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.033 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:22.033 Malloc_0 : 26.71 20787.85 81.20 0.00 0.00 12201.79 2353.34 503316.48 00:14:22.033 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:22.033 Null_1 : 26.90 23077.81 90.15 0.00 0.00 11067.97 867.61 187790.43 00:14:22.033 =================================================================================================================== 00:14:22.033 Total : 43865.66 171.35 0.00 0.00 11603.30 867.61 503316.48 00:14:22.033 0 00:14:22.033 05:11:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:22.033 05:11:41 -- bdev/blockdev.sh@459 -- # killprocess 67511 00:14:22.033 05:11:41 -- common/autotest_common.sh@926 -- # '[' -z 67511 ']' 00:14:22.033 05:11:41 -- common/autotest_common.sh@930 -- # kill -0 67511 00:14:22.033 05:11:41 -- common/autotest_common.sh@931 -- # uname 00:14:22.033 05:11:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:22.033 05:11:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67511 00:14:22.033 05:11:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:22.033 05:11:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:22.033 killing process with pid 67511 00:14:22.033 05:11:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67511' 00:14:22.033 Received shutdown signal, test time was about 26.946857 seconds 00:14:22.033 00:14:22.033 Latency(us) 00:14:22.033 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.033 =================================================================================================================== 00:14:22.033 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:22.033 05:11:41 -- common/autotest_common.sh@945 -- # kill 67511 00:14:22.033 05:11:41 -- common/autotest_common.sh@950 -- # wait 67511 00:14:23.405 05:11:42 -- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT 00:14:23.405 00:14:23.405 real 0m29.521s 00:14:23.405 user 0m30.361s 00:14:23.405 sys 0m0.684s 00:14:23.405 05:11:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:23.405 05:11:42 -- common/autotest_common.sh@10 -- # set +x 00:14:23.405 ************************************ 00:14:23.405 END TEST bdev_qos 00:14:23.405 ************************************ 00:14:23.405 05:11:42 -- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:14:23.405 05:11:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:23.405 05:11:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:23.405 05:11:42 -- common/autotest_common.sh@10 -- # set +x 00:14:23.405 ************************************ 00:14:23.405 START TEST bdev_qd_sampling 00:14:23.405 ************************************ 00:14:23.405 05:11:42 -- common/autotest_common.sh@1104 -- # qd_sampling_test_suite '' 00:14:23.405 05:11:42 -- bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD 00:14:23.405 05:11:42 -- bdev/blockdev.sh@539 -- # QD_PID=67929 00:14:23.405 Process bdev QD sampling period testing pid: 67929 00:14:23.405 05:11:42 -- 
bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period testing pid: 67929' 00:14:23.405 05:11:42 -- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:14:23.406 05:11:42 -- bdev/blockdev.sh@538 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:14:23.406 05:11:42 -- bdev/blockdev.sh@542 -- # waitforlisten 67929 00:14:23.406 05:11:42 -- common/autotest_common.sh@819 -- # '[' -z 67929 ']' 00:14:23.406 05:11:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.406 05:11:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:23.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.406 05:11:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.406 05:11:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:23.406 05:11:42 -- common/autotest_common.sh@10 -- # set +x 00:14:23.406 [2024-07-26 05:11:42.411456] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:23.406 [2024-07-26 05:11:42.411617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67929 ] 00:14:23.663 [2024-07-26 05:11:42.573517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:23.921 [2024-07-26 05:11:42.812572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.921 [2024-07-26 05:11:42.812578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.495 05:11:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:24.495 05:11:43 -- common/autotest_common.sh@852 -- # return 0 00:14:24.495 05:11:43 -- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:14:24.495 05:11:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.495 05:11:43 -- common/autotest_common.sh@10 -- # set +x 00:14:24.495 Malloc_QD 00:14:24.495 05:11:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.495 05:11:43 -- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD 00:14:24.495 05:11:43 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_QD 00:14:24.495 05:11:43 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:24.495 05:11:43 -- common/autotest_common.sh@889 -- # local i 00:14:24.495 05:11:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:24.495 05:11:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:24.495 05:11:43 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:24.495 05:11:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.495 05:11:43 -- common/autotest_common.sh@10 -- # set +x 00:14:24.495 05:11:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.495 05:11:43 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:14:24.495 05:11:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.495 05:11:43 -- common/autotest_common.sh@10 -- # set +x 00:14:24.495 [ 00:14:24.495 { 00:14:24.495 "name": "Malloc_QD", 00:14:24.495 "aliases": [ 00:14:24.495 "1c7d494b-48a0-485f-9c9e-9fde24753328" 00:14:24.495 ], 00:14:24.495 "product_name": "Malloc disk", 00:14:24.495 "block_size": 512, 00:14:24.495 "num_blocks": 262144, 
00:14:24.495 "uuid": "1c7d494b-48a0-485f-9c9e-9fde24753328", 00:14:24.495 "assigned_rate_limits": { 00:14:24.495 "rw_ios_per_sec": 0, 00:14:24.495 "rw_mbytes_per_sec": 0, 00:14:24.495 "r_mbytes_per_sec": 0, 00:14:24.495 "w_mbytes_per_sec": 0 00:14:24.495 }, 00:14:24.495 "claimed": false, 00:14:24.495 "zoned": false, 00:14:24.495 "supported_io_types": { 00:14:24.495 "read": true, 00:14:24.495 "write": true, 00:14:24.495 "unmap": true, 00:14:24.495 "write_zeroes": true, 00:14:24.495 "flush": true, 00:14:24.495 "reset": true, 00:14:24.495 "compare": false, 00:14:24.495 "compare_and_write": false, 00:14:24.495 "abort": true, 00:14:24.495 "nvme_admin": false, 00:14:24.495 "nvme_io": false 00:14:24.495 }, 00:14:24.495 "memory_domains": [ 00:14:24.495 { 00:14:24.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.495 "dma_device_type": 2 00:14:24.495 } 00:14:24.495 ], 00:14:24.495 "driver_specific": {} 00:14:24.495 } 00:14:24.495 ] 00:14:24.495 05:11:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.495 05:11:43 -- common/autotest_common.sh@895 -- # return 0 00:14:24.495 05:11:43 -- bdev/blockdev.sh@548 -- # sleep 2 00:14:24.495 05:11:43 -- bdev/blockdev.sh@547 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:24.773 Running I/O for 5 seconds... 00:14:26.672 05:11:45 -- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD 00:14:26.672 05:11:45 -- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD 00:14:26.672 05:11:45 -- bdev/blockdev.sh@518 -- # local sampling_period=10 00:14:26.672 05:11:45 -- bdev/blockdev.sh@519 -- # local iostats 00:14:26.672 05:11:45 -- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:14:26.672 05:11:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.672 05:11:45 -- common/autotest_common.sh@10 -- # set +x 00:14:26.672 05:11:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.672 05:11:45 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:14:26.672 05:11:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.672 05:11:45 -- common/autotest_common.sh@10 -- # set +x 00:14:26.672 05:11:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.672 05:11:45 -- bdev/blockdev.sh@523 -- # iostats='{ 00:14:26.672 "tick_rate": 2200000000, 00:14:26.672 "ticks": 1733541898415, 00:14:26.672 "bdevs": [ 00:14:26.672 { 00:14:26.672 "name": "Malloc_QD", 00:14:26.672 "bytes_read": 840995328, 00:14:26.672 "num_read_ops": 205315, 00:14:26.672 "bytes_written": 0, 00:14:26.672 "num_write_ops": 0, 00:14:26.672 "bytes_unmapped": 0, 00:14:26.672 "num_unmap_ops": 0, 00:14:26.672 "bytes_copied": 0, 00:14:26.672 "num_copy_ops": 0, 00:14:26.672 "read_latency_ticks": 2138306593098, 00:14:26.672 "max_read_latency_ticks": 14693482, 00:14:26.672 "min_read_latency_ticks": 321468, 00:14:26.672 "write_latency_ticks": 0, 00:14:26.672 "max_write_latency_ticks": 0, 00:14:26.672 "min_write_latency_ticks": 0, 00:14:26.672 "unmap_latency_ticks": 0, 00:14:26.672 "max_unmap_latency_ticks": 0, 00:14:26.672 "min_unmap_latency_ticks": 0, 00:14:26.672 "copy_latency_ticks": 0, 00:14:26.672 "max_copy_latency_ticks": 0, 00:14:26.672 "min_copy_latency_ticks": 0, 00:14:26.672 "io_error": {}, 00:14:26.672 "queue_depth_polling_period": 10, 00:14:26.672 "queue_depth": 512, 00:14:26.672 "io_time": 30, 00:14:26.672 "weighted_io_time": 15360 00:14:26.672 } 00:14:26.672 ] 00:14:26.672 }' 00:14:26.672 05:11:45 -- bdev/blockdev.sh@525 -- # jq -r '.bdevs[0].queue_depth_polling_period' 
00:14:26.672 05:11:45 -- bdev/blockdev.sh@525 -- # qd_sampling_period=10 00:14:26.672 05:11:45 -- bdev/blockdev.sh@527 -- # '[' 10 == null ']' 00:14:26.672 05:11:45 -- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']' 00:14:26.672 05:11:45 -- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:14:26.672 05:11:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.672 05:11:45 -- common/autotest_common.sh@10 -- # set +x 00:14:26.672 00:14:26.672 Latency(us) 00:14:26.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.672 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:14:26.672 Malloc_QD : 1.95 53298.60 208.20 0.00 0.00 4790.95 1482.01 6702.55 00:14:26.672 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:26.672 Malloc_QD : 1.95 54600.60 213.28 0.00 0.00 4677.48 1176.67 5779.08 00:14:26.672 =================================================================================================================== 00:14:26.672 Total : 107899.20 421.48 0.00 0.00 4733.53 1176.67 6702.55 00:14:26.672 0 00:14:26.672 05:11:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.672 05:11:45 -- bdev/blockdev.sh@552 -- # killprocess 67929 00:14:26.672 05:11:45 -- common/autotest_common.sh@926 -- # '[' -z 67929 ']' 00:14:26.672 05:11:45 -- common/autotest_common.sh@930 -- # kill -0 67929 00:14:26.672 05:11:45 -- common/autotest_common.sh@931 -- # uname 00:14:26.672 05:11:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:26.672 05:11:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67929 00:14:26.672 killing process with pid 67929 00:14:26.672 Received shutdown signal, test time was about 2.084680 seconds 00:14:26.672 00:14:26.672 Latency(us) 00:14:26.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.672 =================================================================================================================== 00:14:26.672 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:26.672 05:11:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:26.672 05:11:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:26.672 05:11:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67929' 00:14:26.672 05:11:45 -- common/autotest_common.sh@945 -- # kill 67929 00:14:26.672 05:11:45 -- common/autotest_common.sh@950 -- # wait 67929 00:14:28.045 05:11:46 -- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT 00:14:28.045 00:14:28.045 real 0m4.639s 00:14:28.045 user 0m8.661s 00:14:28.045 sys 0m0.381s 00:14:28.045 05:11:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:28.045 05:11:46 -- common/autotest_common.sh@10 -- # set +x 00:14:28.045 ************************************ 00:14:28.045 END TEST bdev_qd_sampling 00:14:28.045 ************************************ 00:14:28.045 05:11:47 -- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite '' 00:14:28.045 05:11:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:28.045 05:11:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:28.045 05:11:47 -- common/autotest_common.sh@10 -- # set +x 00:14:28.045 ************************************ 00:14:28.045 START TEST bdev_error 00:14:28.045 ************************************ 00:14:28.046 05:11:47 -- common/autotest_common.sh@1104 -- # error_test_suite '' 00:14:28.046 05:11:47 -- bdev/blockdev.sh@464 -- # DEV_1=Dev_1 00:14:28.046 05:11:47 -- 
bdev/blockdev.sh@465 -- # DEV_2=Dev_2 00:14:28.046 05:11:47 -- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1 00:14:28.046 05:11:47 -- bdev/blockdev.sh@470 -- # ERR_PID=68006 00:14:28.046 05:11:47 -- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 68006' 00:14:28.046 Process error testing pid: 68006 00:14:28.046 05:11:47 -- bdev/blockdev.sh@472 -- # waitforlisten 68006 00:14:28.046 05:11:47 -- bdev/blockdev.sh@469 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:14:28.046 05:11:47 -- common/autotest_common.sh@819 -- # '[' -z 68006 ']' 00:14:28.046 05:11:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.046 05:11:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:28.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.046 05:11:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.046 05:11:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:28.046 05:11:47 -- common/autotest_common.sh@10 -- # set +x 00:14:28.046 [2024-07-26 05:11:47.112210] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:28.046 [2024-07-26 05:11:47.112399] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68006 ] 00:14:28.303 [2024-07-26 05:11:47.276182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.561 [2024-07-26 05:11:47.456069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.127 05:11:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:29.127 05:11:48 -- common/autotest_common.sh@852 -- # return 0 00:14:29.127 05:11:48 -- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:14:29.127 05:11:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.127 05:11:48 -- common/autotest_common.sh@10 -- # set +x 00:14:29.127 Dev_1 00:14:29.127 05:11:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.127 05:11:48 -- bdev/blockdev.sh@475 -- # waitforbdev Dev_1 00:14:29.127 05:11:48 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:14:29.127 05:11:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:29.127 05:11:48 -- common/autotest_common.sh@889 -- # local i 00:14:29.127 05:11:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:29.127 05:11:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:29.127 05:11:48 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:29.127 05:11:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.127 05:11:48 -- common/autotest_common.sh@10 -- # set +x 00:14:29.127 05:11:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.127 05:11:48 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:14:29.127 05:11:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.127 05:11:48 -- common/autotest_common.sh@10 -- # set +x 00:14:29.127 [ 00:14:29.127 { 00:14:29.127 "name": "Dev_1", 00:14:29.127 "aliases": [ 00:14:29.127 "b74e793b-6392-4695-94ff-aafaec30a8cb" 00:14:29.127 ], 00:14:29.127 "product_name": "Malloc disk", 00:14:29.127 "block_size": 512, 00:14:29.127 "num_blocks": 262144, 00:14:29.127 "uuid": 
"b74e793b-6392-4695-94ff-aafaec30a8cb", 00:14:29.127 "assigned_rate_limits": { 00:14:29.127 "rw_ios_per_sec": 0, 00:14:29.127 "rw_mbytes_per_sec": 0, 00:14:29.127 "r_mbytes_per_sec": 0, 00:14:29.127 "w_mbytes_per_sec": 0 00:14:29.127 }, 00:14:29.127 "claimed": false, 00:14:29.127 "zoned": false, 00:14:29.127 "supported_io_types": { 00:14:29.127 "read": true, 00:14:29.127 "write": true, 00:14:29.127 "unmap": true, 00:14:29.127 "write_zeroes": true, 00:14:29.127 "flush": true, 00:14:29.127 "reset": true, 00:14:29.127 "compare": false, 00:14:29.127 "compare_and_write": false, 00:14:29.127 "abort": true, 00:14:29.127 "nvme_admin": false, 00:14:29.127 "nvme_io": false 00:14:29.127 }, 00:14:29.128 "memory_domains": [ 00:14:29.128 { 00:14:29.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.128 "dma_device_type": 2 00:14:29.128 } 00:14:29.128 ], 00:14:29.128 "driver_specific": {} 00:14:29.128 } 00:14:29.128 ] 00:14:29.128 05:11:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.128 05:11:48 -- common/autotest_common.sh@895 -- # return 0 00:14:29.128 05:11:48 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1 00:14:29.128 05:11:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.128 05:11:48 -- common/autotest_common.sh@10 -- # set +x 00:14:29.128 true 00:14:29.128 05:11:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.128 05:11:48 -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:14:29.128 05:11:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.128 05:11:48 -- common/autotest_common.sh@10 -- # set +x 00:14:29.386 Dev_2 00:14:29.386 05:11:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.386 05:11:48 -- bdev/blockdev.sh@478 -- # waitforbdev Dev_2 00:14:29.386 05:11:48 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:14:29.386 05:11:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:29.386 05:11:48 -- common/autotest_common.sh@889 -- # local i 00:14:29.386 05:11:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:29.386 05:11:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:29.386 05:11:48 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:29.386 05:11:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.386 05:11:48 -- common/autotest_common.sh@10 -- # set +x 00:14:29.386 05:11:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.386 05:11:48 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:14:29.386 05:11:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.386 05:11:48 -- common/autotest_common.sh@10 -- # set +x 00:14:29.386 [ 00:14:29.386 { 00:14:29.386 "name": "Dev_2", 00:14:29.386 "aliases": [ 00:14:29.386 "92de754f-f631-4b81-b830-91e9f7cf7b7f" 00:14:29.386 ], 00:14:29.386 "product_name": "Malloc disk", 00:14:29.386 "block_size": 512, 00:14:29.386 "num_blocks": 262144, 00:14:29.386 "uuid": "92de754f-f631-4b81-b830-91e9f7cf7b7f", 00:14:29.386 "assigned_rate_limits": { 00:14:29.386 "rw_ios_per_sec": 0, 00:14:29.386 "rw_mbytes_per_sec": 0, 00:14:29.386 "r_mbytes_per_sec": 0, 00:14:29.386 "w_mbytes_per_sec": 0 00:14:29.386 }, 00:14:29.386 "claimed": false, 00:14:29.386 "zoned": false, 00:14:29.386 "supported_io_types": { 00:14:29.386 "read": true, 00:14:29.386 "write": true, 00:14:29.386 "unmap": true, 00:14:29.386 "write_zeroes": true, 00:14:29.386 "flush": true, 00:14:29.386 "reset": true, 00:14:29.386 "compare": false, 00:14:29.386 
"compare_and_write": false, 00:14:29.386 "abort": true, 00:14:29.386 "nvme_admin": false, 00:14:29.386 "nvme_io": false 00:14:29.386 }, 00:14:29.386 "memory_domains": [ 00:14:29.386 { 00:14:29.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.386 "dma_device_type": 2 00:14:29.386 } 00:14:29.386 ], 00:14:29.386 "driver_specific": {} 00:14:29.386 } 00:14:29.386 ] 00:14:29.386 05:11:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.386 05:11:48 -- common/autotest_common.sh@895 -- # return 0 00:14:29.386 05:11:48 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:14:29.386 05:11:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.386 05:11:48 -- common/autotest_common.sh@10 -- # set +x 00:14:29.386 05:11:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.386 05:11:48 -- bdev/blockdev.sh@482 -- # sleep 1 00:14:29.386 05:11:48 -- bdev/blockdev.sh@481 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:14:29.644 Running I/O for 5 seconds... 00:14:30.577 05:11:49 -- bdev/blockdev.sh@485 -- # kill -0 68006 00:14:30.577 Process is existed as continue on error is set. Pid: 68006 00:14:30.577 05:11:49 -- bdev/blockdev.sh@486 -- # echo 'Process is existed as continue on error is set. Pid: 68006' 00:14:30.577 05:11:49 -- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:14:30.577 05:11:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.577 05:11:49 -- common/autotest_common.sh@10 -- # set +x 00:14:30.577 05:11:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.577 05:11:49 -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1 00:14:30.577 05:11:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.577 05:11:49 -- common/autotest_common.sh@10 -- # set +x 00:14:30.577 Timeout while waiting for response: 00:14:30.577 00:14:30.577 00:14:30.577 05:11:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.577 05:11:49 -- bdev/blockdev.sh@495 -- # sleep 5 00:14:34.758 00:14:34.758 Latency(us) 00:14:34.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.758 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:34.758 EE_Dev_1 : 0.88 38007.65 148.47 5.65 0.00 417.80 145.22 882.50 00:14:34.758 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:34.758 Dev_2 : 5.00 78598.01 307.02 0.00 0.00 200.51 74.47 287881.77 00:14:34.758 =================================================================================================================== 00:14:34.758 Total : 116605.66 455.49 5.65 0.00 217.64 74.47 287881.77 00:14:35.692 05:11:54 -- bdev/blockdev.sh@497 -- # killprocess 68006 00:14:35.692 05:11:54 -- common/autotest_common.sh@926 -- # '[' -z 68006 ']' 00:14:35.692 05:11:54 -- common/autotest_common.sh@930 -- # kill -0 68006 00:14:35.692 05:11:54 -- common/autotest_common.sh@931 -- # uname 00:14:35.692 05:11:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:35.692 05:11:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68006 00:14:35.692 05:11:54 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:35.692 05:11:54 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:35.692 killing process with pid 68006 00:14:35.692 05:11:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68006' 00:14:35.692 Received shutdown signal, test time was about 5.000000 seconds 
00:14:35.692 00:14:35.692 Latency(us) 00:14:35.692 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.692 =================================================================================================================== 00:14:35.692 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:35.692 05:11:54 -- common/autotest_common.sh@945 -- # kill 68006 00:14:35.692 05:11:54 -- common/autotest_common.sh@950 -- # wait 68006 00:14:37.067 05:11:56 -- bdev/blockdev.sh@501 -- # ERR_PID=68116 00:14:37.067 05:11:56 -- bdev/blockdev.sh@500 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:14:37.067 Process error testing pid: 68116 00:14:37.067 05:11:56 -- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 68116' 00:14:37.067 05:11:56 -- bdev/blockdev.sh@503 -- # waitforlisten 68116 00:14:37.067 05:11:56 -- common/autotest_common.sh@819 -- # '[' -z 68116 ']' 00:14:37.067 05:11:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.067 05:11:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:37.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.067 05:11:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.067 05:11:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:37.067 05:11:56 -- common/autotest_common.sh@10 -- # set +x 00:14:37.067 [2024-07-26 05:11:56.096834] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:37.067 [2024-07-26 05:11:56.097067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68116 ] 00:14:37.326 [2024-07-26 05:11:56.266678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.584 [2024-07-26 05:11:56.446749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.151 05:11:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:38.151 05:11:57 -- common/autotest_common.sh@852 -- # return 0 00:14:38.151 05:11:57 -- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:14:38.151 05:11:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.151 05:11:57 -- common/autotest_common.sh@10 -- # set +x 00:14:38.151 Dev_1 00:14:38.151 05:11:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.151 05:11:57 -- bdev/blockdev.sh@506 -- # waitforbdev Dev_1 00:14:38.151 05:11:57 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:14:38.151 05:11:57 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:38.151 05:11:57 -- common/autotest_common.sh@889 -- # local i 00:14:38.151 05:11:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:38.151 05:11:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:38.151 05:11:57 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:38.151 05:11:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.151 05:11:57 -- common/autotest_common.sh@10 -- # set +x 00:14:38.151 05:11:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.151 05:11:57 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:14:38.151 05:11:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.151 
05:11:57 -- common/autotest_common.sh@10 -- # set +x 00:14:38.151 [ 00:14:38.151 { 00:14:38.151 "name": "Dev_1", 00:14:38.151 "aliases": [ 00:14:38.151 "17bbf989-0036-49d9-81c5-7376d13da360" 00:14:38.151 ], 00:14:38.151 "product_name": "Malloc disk", 00:14:38.151 "block_size": 512, 00:14:38.151 "num_blocks": 262144, 00:14:38.151 "uuid": "17bbf989-0036-49d9-81c5-7376d13da360", 00:14:38.151 "assigned_rate_limits": { 00:14:38.151 "rw_ios_per_sec": 0, 00:14:38.151 "rw_mbytes_per_sec": 0, 00:14:38.151 "r_mbytes_per_sec": 0, 00:14:38.151 "w_mbytes_per_sec": 0 00:14:38.151 }, 00:14:38.151 "claimed": false, 00:14:38.151 "zoned": false, 00:14:38.151 "supported_io_types": { 00:14:38.151 "read": true, 00:14:38.151 "write": true, 00:14:38.151 "unmap": true, 00:14:38.151 "write_zeroes": true, 00:14:38.151 "flush": true, 00:14:38.151 "reset": true, 00:14:38.151 "compare": false, 00:14:38.151 "compare_and_write": false, 00:14:38.151 "abort": true, 00:14:38.151 "nvme_admin": false, 00:14:38.151 "nvme_io": false 00:14:38.151 }, 00:14:38.151 "memory_domains": [ 00:14:38.151 { 00:14:38.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.151 "dma_device_type": 2 00:14:38.151 } 00:14:38.151 ], 00:14:38.151 "driver_specific": {} 00:14:38.151 } 00:14:38.151 ] 00:14:38.151 05:11:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.151 05:11:57 -- common/autotest_common.sh@895 -- # return 0 00:14:38.151 05:11:57 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1 00:14:38.151 05:11:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.151 05:11:57 -- common/autotest_common.sh@10 -- # set +x 00:14:38.151 true 00:14:38.151 05:11:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.151 05:11:57 -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:14:38.151 05:11:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.151 05:11:57 -- common/autotest_common.sh@10 -- # set +x 00:14:38.410 Dev_2 00:14:38.410 05:11:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.410 05:11:57 -- bdev/blockdev.sh@509 -- # waitforbdev Dev_2 00:14:38.410 05:11:57 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:14:38.410 05:11:57 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:38.410 05:11:57 -- common/autotest_common.sh@889 -- # local i 00:14:38.410 05:11:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:38.410 05:11:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:38.410 05:11:57 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:38.410 05:11:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.411 05:11:57 -- common/autotest_common.sh@10 -- # set +x 00:14:38.411 05:11:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.411 05:11:57 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:14:38.411 05:11:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.411 05:11:57 -- common/autotest_common.sh@10 -- # set +x 00:14:38.411 [ 00:14:38.411 { 00:14:38.411 "name": "Dev_2", 00:14:38.411 "aliases": [ 00:14:38.411 "32498064-7e32-4ba4-9bc9-ec5cefe6938e" 00:14:38.411 ], 00:14:38.411 "product_name": "Malloc disk", 00:14:38.411 "block_size": 512, 00:14:38.411 "num_blocks": 262144, 00:14:38.411 "uuid": "32498064-7e32-4ba4-9bc9-ec5cefe6938e", 00:14:38.411 "assigned_rate_limits": { 00:14:38.411 "rw_ios_per_sec": 0, 00:14:38.411 "rw_mbytes_per_sec": 0, 00:14:38.411 "r_mbytes_per_sec": 0, 00:14:38.411 "w_mbytes_per_sec": 0 
00:14:38.411 }, 00:14:38.411 "claimed": false, 00:14:38.411 "zoned": false, 00:14:38.411 "supported_io_types": { 00:14:38.411 "read": true, 00:14:38.411 "write": true, 00:14:38.411 "unmap": true, 00:14:38.411 "write_zeroes": true, 00:14:38.411 "flush": true, 00:14:38.411 "reset": true, 00:14:38.411 "compare": false, 00:14:38.411 "compare_and_write": false, 00:14:38.411 "abort": true, 00:14:38.411 "nvme_admin": false, 00:14:38.411 "nvme_io": false 00:14:38.411 }, 00:14:38.411 "memory_domains": [ 00:14:38.411 { 00:14:38.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.411 "dma_device_type": 2 00:14:38.411 } 00:14:38.411 ], 00:14:38.411 "driver_specific": {} 00:14:38.411 } 00:14:38.411 ] 00:14:38.411 05:11:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.411 05:11:57 -- common/autotest_common.sh@895 -- # return 0 00:14:38.411 05:11:57 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:14:38.411 05:11:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.411 05:11:57 -- common/autotest_common.sh@10 -- # set +x 00:14:38.411 05:11:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.411 05:11:57 -- bdev/blockdev.sh@513 -- # NOT wait 68116 00:14:38.411 05:11:57 -- common/autotest_common.sh@640 -- # local es=0 00:14:38.411 05:11:57 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 68116 00:14:38.411 05:11:57 -- bdev/blockdev.sh@512 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:14:38.411 05:11:57 -- common/autotest_common.sh@628 -- # local arg=wait 00:14:38.411 05:11:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:38.411 05:11:57 -- common/autotest_common.sh@632 -- # type -t wait 00:14:38.411 05:11:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:38.411 05:11:57 -- common/autotest_common.sh@643 -- # wait 68116 00:14:38.411 Running I/O for 5 seconds... 
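The error suite above stacks an error-injection bdev on top of Dev_1 (it appears as EE_Dev_1 in the latency tables) and arms it to fail the next five I/Os of any type before bdevperf.py triggers the run. The equivalent standalone calls, mirroring the rpc_cmd lines in the log and assuming scripts/rpc.py on the default socket:

SPDK=/home/vagrant/spdk_repo/spdk

# Stack an error-injection bdev on Dev_1; the log addresses it as EE_Dev_1.
$SPDK/scripts/rpc.py bdev_error_create Dev_1

# Arm it: fail the next 5 I/Os of any type with a generic failure status.
$SPDK/scripts/rpc.py bdev_error_inject_error EE_Dev_1 all failure -n 5

# Kick off the I/O phase over RPC, as the test does via bdevperf.py.
$SPDK/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests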
00:14:38.411 task offset: 54440 on job bdev=EE_Dev_1 fails 00:14:38.411 00:14:38.411 Latency(us) 00:14:38.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.411 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:38.411 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:14:38.411 EE_Dev_1 : 0.00 25345.62 99.01 5760.37 0.00 419.86 164.77 752.17 00:14:38.411 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:38.411 Dev_2 : 0.00 17259.98 67.42 0.00 0.00 663.21 159.19 1213.91 00:14:38.411 =================================================================================================================== 00:14:38.411 Total : 42605.60 166.43 5760.37 0.00 551.85 159.19 1213.91 00:14:38.411 [2024-07-26 05:11:57.464222] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:38.411 request: 00:14:38.411 { 00:14:38.411 "method": "perform_tests", 00:14:38.411 "req_id": 1 00:14:38.411 } 00:14:38.411 Got JSON-RPC error response 00:14:38.411 response: 00:14:38.411 { 00:14:38.411 "code": -32603, 00:14:38.411 "message": "bdevperf failed with error Operation not permitted" 00:14:38.411 } 00:14:40.313 05:11:59 -- common/autotest_common.sh@643 -- # es=255 00:14:40.313 05:11:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:40.313 05:11:59 -- common/autotest_common.sh@652 -- # es=127 00:14:40.313 05:11:59 -- common/autotest_common.sh@653 -- # case "$es" in 00:14:40.313 05:11:59 -- common/autotest_common.sh@660 -- # es=1 00:14:40.313 05:11:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:40.313 00:14:40.313 real 0m12.032s 00:14:40.313 user 0m12.402s 00:14:40.313 sys 0m0.758s 00:14:40.313 05:11:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:40.313 05:11:59 -- common/autotest_common.sh@10 -- # set +x 00:14:40.313 ************************************ 00:14:40.313 END TEST bdev_error 00:14:40.313 ************************************ 00:14:40.313 05:11:59 -- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite '' 00:14:40.313 05:11:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:40.313 05:11:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:40.313 05:11:59 -- common/autotest_common.sh@10 -- # set +x 00:14:40.313 ************************************ 00:14:40.313 START TEST bdev_stat 00:14:40.313 ************************************ 00:14:40.313 05:11:59 -- common/autotest_common.sh@1104 -- # stat_test_suite '' 00:14:40.313 05:11:59 -- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT 00:14:40.313 05:11:59 -- bdev/blockdev.sh@594 -- # STAT_PID=68174 00:14:40.313 Process Bdev IO statistics testing pid: 68174 00:14:40.313 05:11:59 -- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 68174' 00:14:40.313 05:11:59 -- bdev/blockdev.sh@593 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:14:40.313 05:11:59 -- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:14:40.313 05:11:59 -- bdev/blockdev.sh@597 -- # waitforlisten 68174 00:14:40.313 05:11:59 -- common/autotest_common.sh@819 -- # '[' -z 68174 ']' 00:14:40.313 05:11:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.313 05:11:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:40.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:40.313 05:11:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.313 05:11:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:40.313 05:11:59 -- common/autotest_common.sh@10 -- # set +x 00:14:40.313 [2024-07-26 05:11:59.200965] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:40.313 [2024-07-26 05:11:59.201150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68174 ] 00:14:40.313 [2024-07-26 05:11:59.373116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:40.571 [2024-07-26 05:11:59.583265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.571 [2024-07-26 05:11:59.583283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.138 05:12:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:41.138 05:12:00 -- common/autotest_common.sh@852 -- # return 0 00:14:41.138 05:12:00 -- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:14:41.138 05:12:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.138 05:12:00 -- common/autotest_common.sh@10 -- # set +x 00:14:41.397 Malloc_STAT 00:14:41.397 05:12:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.397 05:12:00 -- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT 00:14:41.397 05:12:00 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_STAT 00:14:41.397 05:12:00 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:41.397 05:12:00 -- common/autotest_common.sh@889 -- # local i 00:14:41.397 05:12:00 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:41.398 05:12:00 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:41.398 05:12:00 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:41.398 05:12:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.398 05:12:00 -- common/autotest_common.sh@10 -- # set +x 00:14:41.398 05:12:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.398 05:12:00 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:14:41.398 05:12:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.398 05:12:00 -- common/autotest_common.sh@10 -- # set +x 00:14:41.398 [ 00:14:41.398 { 00:14:41.398 "name": "Malloc_STAT", 00:14:41.398 "aliases": [ 00:14:41.398 "7660e994-7136-4943-be23-72dd9813e2fa" 00:14:41.398 ], 00:14:41.398 "product_name": "Malloc disk", 00:14:41.398 "block_size": 512, 00:14:41.398 "num_blocks": 262144, 00:14:41.398 "uuid": "7660e994-7136-4943-be23-72dd9813e2fa", 00:14:41.398 "assigned_rate_limits": { 00:14:41.398 "rw_ios_per_sec": 0, 00:14:41.398 "rw_mbytes_per_sec": 0, 00:14:41.398 "r_mbytes_per_sec": 0, 00:14:41.398 "w_mbytes_per_sec": 0 00:14:41.398 }, 00:14:41.398 "claimed": false, 00:14:41.398 "zoned": false, 00:14:41.398 "supported_io_types": { 00:14:41.398 "read": true, 00:14:41.398 "write": true, 00:14:41.398 "unmap": true, 00:14:41.398 "write_zeroes": true, 00:14:41.398 "flush": true, 00:14:41.398 "reset": true, 00:14:41.398 "compare": false, 00:14:41.398 "compare_and_write": false, 00:14:41.398 "abort": true, 00:14:41.398 "nvme_admin": false, 00:14:41.398 "nvme_io": false 00:14:41.398 }, 00:14:41.398 "memory_domains": [ 00:14:41.398 { 00:14:41.398 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.398 "dma_device_type": 2 00:14:41.398 } 00:14:41.398 ], 00:14:41.398 "driver_specific": {} 00:14:41.398 } 00:14:41.398 ] 00:14:41.398 05:12:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.398 05:12:00 -- common/autotest_common.sh@895 -- # return 0 00:14:41.398 05:12:00 -- bdev/blockdev.sh@603 -- # sleep 2 00:14:41.398 05:12:00 -- bdev/blockdev.sh@602 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:41.398 Running I/O for 10 seconds... 00:14:43.300 05:12:02 -- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT 00:14:43.300 05:12:02 -- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT 00:14:43.300 05:12:02 -- bdev/blockdev.sh@558 -- # local iostats 00:14:43.300 05:12:02 -- bdev/blockdev.sh@559 -- # local io_count1 00:14:43.300 05:12:02 -- bdev/blockdev.sh@560 -- # local io_count2 00:14:43.300 05:12:02 -- bdev/blockdev.sh@561 -- # local iostats_per_channel 00:14:43.300 05:12:02 -- bdev/blockdev.sh@562 -- # local io_count_per_channel1 00:14:43.300 05:12:02 -- bdev/blockdev.sh@563 -- # local io_count_per_channel2 00:14:43.300 05:12:02 -- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0 00:14:43.300 05:12:02 -- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:14:43.300 05:12:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.300 05:12:02 -- common/autotest_common.sh@10 -- # set +x 00:14:43.300 05:12:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.300 05:12:02 -- bdev/blockdev.sh@566 -- # iostats='{ 00:14:43.300 "tick_rate": 2200000000, 00:14:43.300 "ticks": 1770486732522, 00:14:43.300 "bdevs": [ 00:14:43.300 { 00:14:43.300 "name": "Malloc_STAT", 00:14:43.300 "bytes_read": 825266688, 00:14:43.300 "num_read_ops": 201475, 00:14:43.300 "bytes_written": 0, 00:14:43.300 "num_write_ops": 0, 00:14:43.300 "bytes_unmapped": 0, 00:14:43.300 "num_unmap_ops": 0, 00:14:43.300 "bytes_copied": 0, 00:14:43.300 "num_copy_ops": 0, 00:14:43.300 "read_latency_ticks": 2122947895516, 00:14:43.300 "max_read_latency_ticks": 11855076, 00:14:43.300 "min_read_latency_ticks": 325702, 00:14:43.300 "write_latency_ticks": 0, 00:14:43.300 "max_write_latency_ticks": 0, 00:14:43.300 "min_write_latency_ticks": 0, 00:14:43.300 "unmap_latency_ticks": 0, 00:14:43.300 "max_unmap_latency_ticks": 0, 00:14:43.300 "min_unmap_latency_ticks": 0, 00:14:43.300 "copy_latency_ticks": 0, 00:14:43.300 "max_copy_latency_ticks": 0, 00:14:43.300 "min_copy_latency_ticks": 0, 00:14:43.300 "io_error": {} 00:14:43.300 } 00:14:43.300 ] 00:14:43.300 }' 00:14:43.300 05:12:02 -- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops' 00:14:43.300 05:12:02 -- bdev/blockdev.sh@567 -- # io_count1=201475 00:14:43.300 05:12:02 -- bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:14:43.300 05:12:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.300 05:12:02 -- common/autotest_common.sh@10 -- # set +x 00:14:43.300 05:12:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.300 05:12:02 -- bdev/blockdev.sh@569 -- # iostats_per_channel='{ 00:14:43.300 "tick_rate": 2200000000, 00:14:43.300 "ticks": 1770551578548, 00:14:43.300 "name": "Malloc_STAT", 00:14:43.300 "channels": [ 00:14:43.300 { 00:14:43.300 "thread_id": 2, 00:14:43.300 "bytes_read": 417333248, 00:14:43.300 "num_read_ops": 101888, 00:14:43.300 "bytes_written": 0, 00:14:43.300 "num_write_ops": 0, 00:14:43.300 "bytes_unmapped": 0, 00:14:43.300 "num_unmap_ops": 0, 00:14:43.300 "bytes_copied": 
0, 00:14:43.300 "num_copy_ops": 0, 00:14:43.300 "read_latency_ticks": 1077807632114, 00:14:43.300 "max_read_latency_ticks": 11855076, 00:14:43.300 "min_read_latency_ticks": 7961174, 00:14:43.300 "write_latency_ticks": 0, 00:14:43.300 "max_write_latency_ticks": 0, 00:14:43.300 "min_write_latency_ticks": 0, 00:14:43.300 "unmap_latency_ticks": 0, 00:14:43.300 "max_unmap_latency_ticks": 0, 00:14:43.300 "min_unmap_latency_ticks": 0, 00:14:43.300 "copy_latency_ticks": 0, 00:14:43.300 "max_copy_latency_ticks": 0, 00:14:43.300 "min_copy_latency_ticks": 0 00:14:43.300 }, 00:14:43.300 { 00:14:43.300 "thread_id": 3, 00:14:43.300 "bytes_read": 421527552, 00:14:43.300 "num_read_ops": 102912, 00:14:43.300 "bytes_written": 0, 00:14:43.300 "num_write_ops": 0, 00:14:43.300 "bytes_unmapped": 0, 00:14:43.300 "num_unmap_ops": 0, 00:14:43.300 "bytes_copied": 0, 00:14:43.300 "num_copy_ops": 0, 00:14:43.300 "read_latency_ticks": 1080573197485, 00:14:43.300 "max_read_latency_ticks": 11724489, 00:14:43.300 "min_read_latency_ticks": 7963291, 00:14:43.300 "write_latency_ticks": 0, 00:14:43.300 "max_write_latency_ticks": 0, 00:14:43.300 "min_write_latency_ticks": 0, 00:14:43.300 "unmap_latency_ticks": 0, 00:14:43.300 "max_unmap_latency_ticks": 0, 00:14:43.300 "min_unmap_latency_ticks": 0, 00:14:43.300 "copy_latency_ticks": 0, 00:14:43.300 "max_copy_latency_ticks": 0, 00:14:43.300 "min_copy_latency_ticks": 0 00:14:43.300 } 00:14:43.300 ] 00:14:43.301 }' 00:14:43.301 05:12:02 -- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops' 00:14:43.301 05:12:02 -- bdev/blockdev.sh@570 -- # io_count_per_channel1=101888 00:14:43.301 05:12:02 -- bdev/blockdev.sh@571 -- # io_count_per_channel_all=101888 00:14:43.558 05:12:02 -- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops' 00:14:43.558 05:12:02 -- bdev/blockdev.sh@572 -- # io_count_per_channel2=102912 00:14:43.558 05:12:02 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=204800 00:14:43.558 05:12:02 -- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:14:43.558 05:12:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.558 05:12:02 -- common/autotest_common.sh@10 -- # set +x 00:14:43.558 05:12:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.558 05:12:02 -- bdev/blockdev.sh@575 -- # iostats='{ 00:14:43.558 "tick_rate": 2200000000, 00:14:43.558 "ticks": 1770649711787, 00:14:43.558 "bdevs": [ 00:14:43.558 { 00:14:43.558 "name": "Malloc_STAT", 00:14:43.558 "bytes_read": 857772544, 00:14:43.558 "num_read_ops": 209411, 00:14:43.558 "bytes_written": 0, 00:14:43.558 "num_write_ops": 0, 00:14:43.558 "bytes_unmapped": 0, 00:14:43.558 "num_unmap_ops": 0, 00:14:43.558 "bytes_copied": 0, 00:14:43.558 "num_copy_ops": 0, 00:14:43.558 "read_latency_ticks": 2208207868186, 00:14:43.558 "max_read_latency_ticks": 11904485, 00:14:43.558 "min_read_latency_ticks": 325702, 00:14:43.558 "write_latency_ticks": 0, 00:14:43.558 "max_write_latency_ticks": 0, 00:14:43.558 "min_write_latency_ticks": 0, 00:14:43.558 "unmap_latency_ticks": 0, 00:14:43.558 "max_unmap_latency_ticks": 0, 00:14:43.558 "min_unmap_latency_ticks": 0, 00:14:43.558 "copy_latency_ticks": 0, 00:14:43.558 "max_copy_latency_ticks": 0, 00:14:43.558 "min_copy_latency_ticks": 0, 00:14:43.558 "io_error": {} 00:14:43.558 } 00:14:43.558 ] 00:14:43.558 }' 00:14:43.558 05:12:02 -- bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops' 00:14:43.558 05:12:02 -- bdev/blockdev.sh@576 -- # io_count2=209411 00:14:43.558 05:12:02 -- bdev/blockdev.sh@581 -- # '[' 204800 -lt 201475 ']' 
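The per-channel check above compares the sum of the two channel counters (101888 + 102912 = 204800) against the aggregate num_read_ops taken before (201475) and after (209411) the sampling window. The same counters can be pulled by hand, under the same rpc.py and jq assumptions as before:

SPDK=/home/vagrant/spdk_repo/spdk

# Aggregate read count for the bdev.
$SPDK/scripts/rpc.py bdev_get_iostat -b Malloc_STAT \
  | jq -r '.bdevs[0].num_read_ops'

# Per-channel read counts; -c adds one entry per I/O channel (thread).
$SPDK/scripts/rpc.py bdev_get_iostat -b Malloc_STAT -c \
  | jq -r '.channels[].num_read_ops'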
00:14:43.558 05:12:02 -- bdev/blockdev.sh@581 -- # '[' 204800 -gt 209411 ']' 00:14:43.558 05:12:02 -- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:14:43.558 05:12:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.558 05:12:02 -- common/autotest_common.sh@10 -- # set +x 00:14:43.558 00:14:43.558 Latency(us) 00:14:43.558 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.558 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:14:43.558 Malloc_STAT : 2.00 53102.74 207.43 0.00 0.00 4809.11 1236.25 5421.61 00:14:43.558 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:43.558 Malloc_STAT : 2.00 53558.33 209.21 0.00 0.00 4768.50 968.15 5332.25 00:14:43.558 =================================================================================================================== 00:14:43.558 Total : 106661.07 416.64 0.00 0.00 4788.71 968.15 5421.61 00:14:43.558 0 00:14:43.558 05:12:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.558 05:12:02 -- bdev/blockdev.sh@607 -- # killprocess 68174 00:14:43.558 05:12:02 -- common/autotest_common.sh@926 -- # '[' -z 68174 ']' 00:14:43.558 05:12:02 -- common/autotest_common.sh@930 -- # kill -0 68174 00:14:43.558 05:12:02 -- common/autotest_common.sh@931 -- # uname 00:14:43.558 05:12:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:43.558 05:12:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68174 00:14:43.558 killing process with pid 68174 00:14:43.558 Received shutdown signal, test time was about 2.140136 seconds 00:14:43.558 00:14:43.558 Latency(us) 00:14:43.558 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.558 =================================================================================================================== 00:14:43.558 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:43.558 05:12:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:43.558 05:12:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:43.558 05:12:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68174' 00:14:43.558 05:12:02 -- common/autotest_common.sh@945 -- # kill 68174 00:14:43.558 05:12:02 -- common/autotest_common.sh@950 -- # wait 68174 00:14:44.932 ************************************ 00:14:44.932 END TEST bdev_stat 00:14:44.932 ************************************ 00:14:44.932 05:12:03 -- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT 00:14:44.932 00:14:44.932 real 0m4.754s 00:14:44.932 user 0m8.871s 00:14:44.932 sys 0m0.384s 00:14:44.932 05:12:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:44.932 05:12:03 -- common/autotest_common.sh@10 -- # set +x 00:14:44.932 05:12:03 -- bdev/blockdev.sh@792 -- # [[ bdev == gpt ]] 00:14:44.932 05:12:03 -- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]] 00:14:44.932 05:12:03 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:14:44.932 05:12:03 -- bdev/blockdev.sh@809 -- # cleanup 00:14:44.932 05:12:03 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:44.932 05:12:03 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:44.932 05:12:03 -- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]] 00:14:44.932 05:12:03 -- bdev/blockdev.sh@28 -- # [[ bdev == daos ]] 00:14:44.932 05:12:03 -- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]] 00:14:44.932 05:12:03 -- bdev/blockdev.sh@38 -- # [[ 
bdev == xnvme ]] 00:14:44.932 00:14:44.932 real 2m22.232s 00:14:44.932 user 5m52.571s 00:14:44.932 sys 0m21.612s 00:14:44.932 ************************************ 00:14:44.932 END TEST blockdev_general 00:14:44.932 ************************************ 00:14:44.932 05:12:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:44.932 05:12:03 -- common/autotest_common.sh@10 -- # set +x 00:14:44.932 05:12:03 -- spdk/autotest.sh@196 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:14:44.932 05:12:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:44.932 05:12:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:44.932 05:12:03 -- common/autotest_common.sh@10 -- # set +x 00:14:44.932 ************************************ 00:14:44.932 START TEST bdev_raid 00:14:44.932 ************************************ 00:14:44.932 05:12:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:14:45.191 * Looking for test storage... 00:14:45.191 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:45.191 05:12:04 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:45.191 05:12:04 -- bdev/nbd_common.sh@6 -- # set -e 00:14:45.191 05:12:04 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:14:45.191 05:12:04 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:14:45.191 05:12:04 -- bdev/bdev_raid.sh@716 -- # uname -s 00:14:45.191 05:12:04 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:14:45.191 05:12:04 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:14:45.191 05:12:04 -- bdev/bdev_raid.sh@717 -- # has_nbd=true 00:14:45.191 05:12:04 -- bdev/bdev_raid.sh@718 -- # modprobe nbd 00:14:45.191 05:12:04 -- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:14:45.191 05:12:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:45.191 05:12:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:45.191 05:12:04 -- common/autotest_common.sh@10 -- # set +x 00:14:45.191 ************************************ 00:14:45.191 START TEST raid_function_test_raid0 00:14:45.191 ************************************ 00:14:45.191 05:12:04 -- common/autotest_common.sh@1104 -- # raid_function_test raid0 00:14:45.191 05:12:04 -- bdev/bdev_raid.sh@81 -- # local raid_level=raid0 00:14:45.191 05:12:04 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:14:45.191 05:12:04 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:14:45.191 05:12:04 -- bdev/bdev_raid.sh@86 -- # raid_pid=68311 00:14:45.191 05:12:04 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:45.191 Process raid pid: 68311 00:14:45.191 05:12:04 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 68311' 00:14:45.191 05:12:04 -- bdev/bdev_raid.sh@88 -- # waitforlisten 68311 /var/tmp/spdk-raid.sock 00:14:45.191 05:12:04 -- common/autotest_common.sh@819 -- # '[' -z 68311 ']' 00:14:45.191 05:12:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:45.191 05:12:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:45.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:45.191 05:12:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
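Stripped of the xtrace plumbing, the Malloc_STAT check that just completed boils down to sampling bdev_get_iostat twice while bdevperf runs (once aggregate, once per channel with -c) and verifying that the summed per-channel read count falls between the two aggregate samples. A minimal sketch using only the RPC calls and jq filters seen in the trace (rpc_cmd stands in for the suite's scripts/rpc.py wrapper pointed at the running app; the bdev name is the one used above):

    io1=$(rpc_cmd bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
    per_ch=$(rpc_cmd bdev_get_iostat -b Malloc_STAT -c)
    ch1=$(jq -r '.channels[0].num_read_ops' <<< "$per_ch")
    ch2=$(jq -r '.channels[1].num_read_ops' <<< "$per_ch")
    ch_sum=$(( ch1 + ch2 ))
    io2=$(rpc_cmd bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
    # read counters only ever grow, so the per-channel sum taken between the two
    # aggregate samples must not be below the first sample or above the second
    [ "$ch_sum" -lt "$io1" ] && exit 1
    [ "$ch_sum" -gt "$io2" ] && exit 1

In the run above that works out to 201475 <= 204800 <= 209411, which is why both bracket tests fall through.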
00:14:45.191 05:12:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:45.191 05:12:04 -- common/autotest_common.sh@10 -- # set +x 00:14:45.191 [2024-07-26 05:12:04.156221] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:45.192 [2024-07-26 05:12:04.156407] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.450 [2024-07-26 05:12:04.328707] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.450 [2024-07-26 05:12:04.500698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.722 [2024-07-26 05:12:04.667575] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:46.000 05:12:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:46.000 05:12:05 -- common/autotest_common.sh@852 -- # return 0 00:14:46.000 05:12:05 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0 00:14:46.000 05:12:05 -- bdev/bdev_raid.sh@67 -- # local raid_level=raid0 00:14:46.000 05:12:05 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:46.000 05:12:05 -- bdev/bdev_raid.sh@70 -- # cat 00:14:46.000 05:12:05 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:46.259 [2024-07-26 05:12:05.331490] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:46.259 [2024-07-26 05:12:05.333676] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:46.259 [2024-07-26 05:12:05.333775] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:14:46.259 [2024-07-26 05:12:05.333796] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:46.259 [2024-07-26 05:12:05.333964] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:14:46.259 [2024-07-26 05:12:05.334398] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:14:46.259 [2024-07-26 05:12:05.334417] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x516000006f80 00:14:46.259 [2024-07-26 05:12:05.334593] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.259 Base_1 00:14:46.259 Base_2 00:14:46.259 05:12:05 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:46.259 05:12:05 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:46.259 05:12:05 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:14:46.517 05:12:05 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:14:46.517 05:12:05 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:14:46.517 05:12:05 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:46.517 05:12:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:46.517 05:12:05 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:14:46.517 05:12:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:46.517 05:12:05 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:46.517 05:12:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:46.517 05:12:05 -- bdev/nbd_common.sh@12 -- # local i 00:14:46.517 05:12:05 -- bdev/nbd_common.sh@14 -- # 
(( i = 0 )) 00:14:46.517 05:12:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:46.517 05:12:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:46.776 [2024-07-26 05:12:05.843621] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:14:46.776 /dev/nbd0 00:14:46.776 05:12:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:46.776 05:12:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:46.776 05:12:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:14:46.776 05:12:05 -- common/autotest_common.sh@857 -- # local i 00:14:46.776 05:12:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:46.776 05:12:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:46.776 05:12:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:46.776 05:12:05 -- common/autotest_common.sh@861 -- # break 00:14:46.776 05:12:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:46.776 05:12:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:46.776 05:12:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:47.035 1+0 records in 00:14:47.035 1+0 records out 00:14:47.035 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293338 s, 14.0 MB/s 00:14:47.035 05:12:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.035 05:12:05 -- common/autotest_common.sh@874 -- # size=4096 00:14:47.035 05:12:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.035 05:12:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:47.035 05:12:05 -- common/autotest_common.sh@877 -- # return 0 00:14:47.035 05:12:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:47.035 05:12:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:47.035 05:12:05 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:47.035 05:12:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:47.035 05:12:05 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:47.293 05:12:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:47.293 { 00:14:47.293 "nbd_device": "/dev/nbd0", 00:14:47.293 "bdev_name": "raid" 00:14:47.293 } 00:14:47.293 ]' 00:14:47.293 05:12:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:47.293 05:12:06 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:47.293 { 00:14:47.293 "nbd_device": "/dev/nbd0", 00:14:47.293 "bdev_name": "raid" 00:14:47.293 } 00:14:47.293 ]' 00:14:47.293 05:12:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:47.293 05:12:06 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:47.293 05:12:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:47.293 05:12:06 -- bdev/nbd_common.sh@65 -- # count=1 00:14:47.293 05:12:06 -- bdev/nbd_common.sh@66 -- # echo 1 00:14:47.293 05:12:06 -- bdev/bdev_raid.sh@98 -- # count=1 00:14:47.293 05:12:06 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:14:47.293 05:12:06 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:47.293 05:12:06 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:14:47.293 05:12:06 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:14:47.293 05:12:06 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:47.293 
05:12:06 -- bdev/bdev_raid.sh@20 -- # local blksize 00:14:47.293 05:12:06 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:47.293 05:12:06 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:14:47.293 05:12:06 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:14:47.293 05:12:06 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:14:47.293 05:12:06 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:14:47.293 05:12:06 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:14:47.293 05:12:06 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:14:47.293 05:12:06 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:14:47.293 05:12:06 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:14:47.293 05:12:06 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:14:47.293 05:12:06 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:14:47.293 05:12:06 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:14:47.293 05:12:06 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:14:47.293 4096+0 records in 00:14:47.293 4096+0 records out 00:14:47.293 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0196946 s, 106 MB/s 00:14:47.293 05:12:06 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:47.552 4096+0 records in 00:14:47.552 4096+0 records out 00:14:47.552 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.314618 s, 6.7 MB/s 00:14:47.552 05:12:06 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:14:47.552 05:12:06 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:47.552 05:12:06 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:14:47.552 05:12:06 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:47.552 05:12:06 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:14:47.552 05:12:06 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:14:47.552 05:12:06 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:47.552 128+0 records in 00:14:47.552 128+0 records out 00:14:47.552 65536 bytes (66 kB, 64 KiB) copied, 0.000535331 s, 122 MB/s 00:14:47.552 05:12:06 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:47.552 05:12:06 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:47.552 05:12:06 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:47.552 05:12:06 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:47.552 05:12:06 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:47.552 05:12:06 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:14:47.552 05:12:06 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:14:47.552 05:12:06 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:47.552 2035+0 records in 00:14:47.552 2035+0 records out 00:14:47.552 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00579622 s, 180 MB/s 00:14:47.552 05:12:06 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:47.552 05:12:06 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:47.552 05:12:06 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:47.552 05:12:06 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:47.552 05:12:06 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:47.552 05:12:06 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:14:47.552 05:12:06 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:14:47.552 05:12:06 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:47.552 
456+0 records in 00:14:47.552 456+0 records out 00:14:47.552 233472 bytes (233 kB, 228 KiB) copied, 0.0014788 s, 158 MB/s 00:14:47.552 05:12:06 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:47.552 05:12:06 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:47.552 05:12:06 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:47.552 05:12:06 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:47.552 05:12:06 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:47.552 05:12:06 -- bdev/bdev_raid.sh@53 -- # return 0 00:14:47.552 05:12:06 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:47.552 05:12:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:47.552 05:12:06 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:47.552 05:12:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:47.552 05:12:06 -- bdev/nbd_common.sh@51 -- # local i 00:14:47.552 05:12:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:47.552 05:12:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:47.811 [2024-07-26 05:12:06.871726] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.811 05:12:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:47.811 05:12:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:47.811 05:12:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:47.811 05:12:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:47.811 05:12:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:47.811 05:12:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:47.811 05:12:06 -- bdev/nbd_common.sh@41 -- # break 00:14:47.811 05:12:06 -- bdev/nbd_common.sh@45 -- # return 0 00:14:47.811 05:12:06 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:47.811 05:12:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:47.811 05:12:06 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:48.069 05:12:07 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:48.069 05:12:07 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:48.069 05:12:07 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:48.070 05:12:07 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:48.070 05:12:07 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:48.070 05:12:07 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:48.070 05:12:07 -- bdev/nbd_common.sh@65 -- # true 00:14:48.070 05:12:07 -- bdev/nbd_common.sh@65 -- # count=0 00:14:48.070 05:12:07 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:48.070 05:12:07 -- bdev/bdev_raid.sh@106 -- # count=0 00:14:48.070 05:12:07 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:14:48.070 05:12:07 -- bdev/bdev_raid.sh@111 -- # killprocess 68311 00:14:48.070 05:12:07 -- common/autotest_common.sh@926 -- # '[' -z 68311 ']' 00:14:48.070 05:12:07 -- common/autotest_common.sh@930 -- # kill -0 68311 00:14:48.070 05:12:07 -- common/autotest_common.sh@931 -- # uname 00:14:48.070 05:12:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:48.070 05:12:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68311 00:14:48.327 05:12:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:48.327 killing process with pid 68311 00:14:48.327 05:12:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = 
sudo ']' 00:14:48.327 05:12:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68311' 00:14:48.327 05:12:07 -- common/autotest_common.sh@945 -- # kill 68311 00:14:48.328 [2024-07-26 05:12:07.192788] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:48.328 05:12:07 -- common/autotest_common.sh@950 -- # wait 68311 00:14:48.328 [2024-07-26 05:12:07.192897] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:48.328 [2024-07-26 05:12:07.192958] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:48.328 [2024-07-26 05:12:07.192976] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name raid, state offline 00:14:48.328 [2024-07-26 05:12:07.337507] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:49.705 05:12:08 -- bdev/bdev_raid.sh@113 -- # return 0 00:14:49.705 00:14:49.705 real 0m4.338s 00:14:49.705 user 0m5.462s 00:14:49.705 sys 0m0.966s 00:14:49.705 05:12:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:49.705 05:12:08 -- common/autotest_common.sh@10 -- # set +x 00:14:49.705 ************************************ 00:14:49.705 END TEST raid_function_test_raid0 00:14:49.705 ************************************ 00:14:49.705 05:12:08 -- bdev/bdev_raid.sh@720 -- # run_test raid_function_test_concat raid_function_test concat 00:14:49.705 05:12:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:49.705 05:12:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:49.705 05:12:08 -- common/autotest_common.sh@10 -- # set +x 00:14:49.705 ************************************ 00:14:49.705 START TEST raid_function_test_concat 00:14:49.705 ************************************ 00:14:49.705 05:12:08 -- common/autotest_common.sh@1104 -- # raid_function_test concat 00:14:49.705 05:12:08 -- bdev/bdev_raid.sh@81 -- # local raid_level=concat 00:14:49.705 05:12:08 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:14:49.705 05:12:08 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:14:49.705 05:12:08 -- bdev/bdev_raid.sh@86 -- # raid_pid=68453 00:14:49.705 05:12:08 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 68453' 00:14:49.705 Process raid pid: 68453 00:14:49.705 05:12:08 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:49.705 05:12:08 -- bdev/bdev_raid.sh@88 -- # waitforlisten 68453 /var/tmp/spdk-raid.sock 00:14:49.705 05:12:08 -- common/autotest_common.sh@819 -- # '[' -z 68453 ']' 00:14:49.705 05:12:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:49.705 05:12:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:49.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:49.705 05:12:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:49.705 05:12:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:49.705 05:12:08 -- common/autotest_common.sh@10 -- # set +x 00:14:49.705 [2024-07-26 05:12:08.545069] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
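The raid_function_test pass that just finished for raid0 (and is now repeating for concat) is, at its core, an unmap/verify loop over an nbd export of the raid bdev: fill the volume with a known random pattern, then discard a few ranges and confirm the device still matches a reference file that had the same ranges zeroed. Condensed to the commands visible in the trace (socket path, nbd device, and offsets/lengths taken from the run; error handling omitted):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc nbd_start_disk raid /dev/nbd0                       # export the 2-member raid bdev built earlier
    dd if=/dev/urandom of=/raidrandtest bs=512 count=4096    # 2 MiB reference pattern
    dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
    blockdev --flushbufs /dev/nbd0
    cmp -b -n 2097152 /raidrandtest /dev/nbd0                # raid contents match the reference
    for blk_off_num in "0 128" "1028 2035" "321 456"; do     # same ranges as the trace, in 512 B blocks
        set -- $blk_off_num
        dd if=/dev/zero of=/raidrandtest bs=512 seek=$1 count=$2 conv=notrunc
        blkdiscard -o $(( $1 * 512 )) -l $(( $2 * 512 )) /dev/nbd0
        blockdev --flushbufs /dev/nbd0
        cmp -b -n 2097152 /raidrandtest /dev/nbd0            # discarded ranges must read back as zeroes for the test to pass
    done
    $rpc nbd_stop_disk /dev/nbd0

The cmp after each blkdiscard is what actually exercises the raid module's unmap path; everything before it only establishes a byte-exact baseline.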
00:14:49.705 [2024-07-26 05:12:08.545241] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.705 [2024-07-26 05:12:08.710649] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.964 [2024-07-26 05:12:08.886555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.964 [2024-07-26 05:12:09.049030] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:50.531 05:12:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:50.531 05:12:09 -- common/autotest_common.sh@852 -- # return 0 00:14:50.531 05:12:09 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat 00:14:50.531 05:12:09 -- bdev/bdev_raid.sh@67 -- # local raid_level=concat 00:14:50.531 05:12:09 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:50.531 05:12:09 -- bdev/bdev_raid.sh@70 -- # cat 00:14:50.531 05:12:09 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:50.790 [2024-07-26 05:12:09.754533] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:50.790 [2024-07-26 05:12:09.756472] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:50.790 [2024-07-26 05:12:09.756570] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:14:50.790 [2024-07-26 05:12:09.756591] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:50.790 [2024-07-26 05:12:09.756733] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:14:50.790 [2024-07-26 05:12:09.757169] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:14:50.790 [2024-07-26 05:12:09.757205] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x516000006f80 00:14:50.790 [2024-07-26 05:12:09.757432] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.790 Base_1 00:14:50.790 Base_2 00:14:50.790 05:12:09 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:50.790 05:12:09 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:50.790 05:12:09 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:14:51.050 05:12:10 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:14:51.050 05:12:10 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:14:51.050 05:12:10 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:51.050 05:12:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:51.050 05:12:10 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:14:51.050 05:12:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:51.050 05:12:10 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:51.050 05:12:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:51.050 05:12:10 -- bdev/nbd_common.sh@12 -- # local i 00:14:51.050 05:12:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:51.050 05:12:10 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:51.050 05:12:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:51.309 [2024-07-26 
05:12:10.214694] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:14:51.309 /dev/nbd0 00:14:51.309 05:12:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:51.309 05:12:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:51.309 05:12:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:14:51.309 05:12:10 -- common/autotest_common.sh@857 -- # local i 00:14:51.309 05:12:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:51.309 05:12:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:51.309 05:12:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:51.309 05:12:10 -- common/autotest_common.sh@861 -- # break 00:14:51.309 05:12:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:51.309 05:12:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:51.309 05:12:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:51.309 1+0 records in 00:14:51.309 1+0 records out 00:14:51.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262185 s, 15.6 MB/s 00:14:51.309 05:12:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.309 05:12:10 -- common/autotest_common.sh@874 -- # size=4096 00:14:51.309 05:12:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.309 05:12:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:51.309 05:12:10 -- common/autotest_common.sh@877 -- # return 0 00:14:51.309 05:12:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:51.309 05:12:10 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:51.309 05:12:10 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:51.309 05:12:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:51.309 05:12:10 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:51.568 05:12:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:51.568 { 00:14:51.568 "nbd_device": "/dev/nbd0", 00:14:51.568 "bdev_name": "raid" 00:14:51.568 } 00:14:51.568 ]' 00:14:51.568 05:12:10 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:51.568 { 00:14:51.568 "nbd_device": "/dev/nbd0", 00:14:51.568 "bdev_name": "raid" 00:14:51.568 } 00:14:51.568 ]' 00:14:51.568 05:12:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:51.568 05:12:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:51.568 05:12:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:51.568 05:12:10 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:51.568 05:12:10 -- bdev/nbd_common.sh@65 -- # count=1 00:14:51.568 05:12:10 -- bdev/nbd_common.sh@66 -- # echo 1 00:14:51.568 05:12:10 -- bdev/bdev_raid.sh@98 -- # count=1 00:14:51.568 05:12:10 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:14:51.568 05:12:10 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:51.568 05:12:10 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:14:51.568 05:12:10 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:14:51.568 05:12:10 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:51.568 05:12:10 -- bdev/bdev_raid.sh@20 -- # local blksize 00:14:51.568 05:12:10 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:51.568 05:12:10 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:14:51.568 05:12:10 -- bdev/bdev_raid.sh@21 -- # cut -d 
' ' -f 5 00:14:51.568 05:12:10 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:14:51.568 05:12:10 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:14:51.568 05:12:10 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:14:51.568 05:12:10 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:14:51.568 05:12:10 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:14:51.568 05:12:10 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:14:51.568 05:12:10 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:14:51.568 05:12:10 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:14:51.568 05:12:10 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:14:51.568 05:12:10 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:14:51.568 4096+0 records in 00:14:51.568 4096+0 records out 00:14:51.568 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0211981 s, 98.9 MB/s 00:14:51.568 05:12:10 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:51.827 4096+0 records in 00:14:51.827 4096+0 records out 00:14:51.827 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.30908 s, 6.8 MB/s 00:14:51.827 05:12:10 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:14:51.827 05:12:10 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:51.827 05:12:10 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:14:51.827 05:12:10 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:51.827 05:12:10 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:14:51.827 05:12:10 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:14:51.827 05:12:10 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:51.827 128+0 records in 00:14:51.827 128+0 records out 00:14:51.827 65536 bytes (66 kB, 64 KiB) copied, 0.000564671 s, 116 MB/s 00:14:51.827 05:12:10 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:51.827 05:12:10 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:51.827 05:12:10 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:51.827 05:12:10 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:51.827 05:12:10 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:51.827 05:12:10 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:14:51.827 05:12:10 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:14:51.827 05:12:10 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:51.827 2035+0 records in 00:14:51.827 2035+0 records out 00:14:51.827 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00368305 s, 283 MB/s 00:14:51.827 05:12:10 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:52.086 05:12:10 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:52.086 05:12:10 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:52.086 05:12:10 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:52.086 05:12:10 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:52.086 05:12:10 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:14:52.086 05:12:10 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:14:52.086 05:12:10 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:52.086 456+0 records in 00:14:52.086 456+0 records out 00:14:52.086 233472 bytes (233 kB, 228 KiB) copied, 0.00151897 s, 154 MB/s 00:14:52.086 05:12:10 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:52.086 05:12:10 -- 
bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:52.086 05:12:10 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:52.086 05:12:10 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:52.086 05:12:10 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:52.086 05:12:10 -- bdev/bdev_raid.sh@53 -- # return 0 00:14:52.086 05:12:10 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:52.086 05:12:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:52.086 05:12:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:52.086 05:12:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:52.086 05:12:10 -- bdev/nbd_common.sh@51 -- # local i 00:14:52.086 05:12:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:52.086 05:12:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:52.345 05:12:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:52.345 [2024-07-26 05:12:11.231691] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.345 05:12:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:52.345 05:12:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:52.345 05:12:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:52.345 05:12:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:52.345 05:12:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:52.345 05:12:11 -- bdev/nbd_common.sh@41 -- # break 00:14:52.345 05:12:11 -- bdev/nbd_common.sh@45 -- # return 0 00:14:52.345 05:12:11 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:52.345 05:12:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:52.346 05:12:11 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:52.605 05:12:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:52.605 05:12:11 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:52.605 05:12:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:52.605 05:12:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:52.605 05:12:11 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:52.605 05:12:11 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:52.605 05:12:11 -- bdev/nbd_common.sh@65 -- # true 00:14:52.605 05:12:11 -- bdev/nbd_common.sh@65 -- # count=0 00:14:52.605 05:12:11 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:52.605 05:12:11 -- bdev/bdev_raid.sh@106 -- # count=0 00:14:52.605 05:12:11 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:14:52.605 05:12:11 -- bdev/bdev_raid.sh@111 -- # killprocess 68453 00:14:52.605 05:12:11 -- common/autotest_common.sh@926 -- # '[' -z 68453 ']' 00:14:52.605 05:12:11 -- common/autotest_common.sh@930 -- # kill -0 68453 00:14:52.605 05:12:11 -- common/autotest_common.sh@931 -- # uname 00:14:52.605 05:12:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:52.605 05:12:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68453 00:14:52.605 05:12:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:52.605 05:12:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:52.605 killing process with pid 68453 00:14:52.605 05:12:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68453' 00:14:52.605 05:12:11 -- common/autotest_common.sh@945 -- # kill 68453 00:14:52.605 [2024-07-26 05:12:11.593509] 
bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:52.605 [2024-07-26 05:12:11.593609] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:52.605 [2024-07-26 05:12:11.593679] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:52.605 [2024-07-26 05:12:11.593698] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name raid, state offline 00:14:52.605 05:12:11 -- common/autotest_common.sh@950 -- # wait 68453 00:14:52.864 [2024-07-26 05:12:11.751482] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:53.801 05:12:12 -- bdev/bdev_raid.sh@113 -- # return 0 00:14:53.801 00:14:53.801 real 0m4.369s 00:14:53.801 user 0m5.569s 00:14:53.801 sys 0m0.920s 00:14:53.801 05:12:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:53.801 05:12:12 -- common/autotest_common.sh@10 -- # set +x 00:14:53.801 ************************************ 00:14:53.801 END TEST raid_function_test_concat 00:14:53.801 ************************************ 00:14:53.801 05:12:12 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:14:53.801 05:12:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:53.801 05:12:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:53.801 05:12:12 -- common/autotest_common.sh@10 -- # set +x 00:14:53.801 ************************************ 00:14:53.801 START TEST raid0_resize_test 00:14:53.801 ************************************ 00:14:53.801 05:12:12 -- common/autotest_common.sh@1104 -- # raid0_resize_test 00:14:53.801 05:12:12 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:14:53.801 05:12:12 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:14:53.801 05:12:12 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:14:53.801 05:12:12 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:14:53.801 05:12:12 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:14:53.801 05:12:12 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:14:53.801 05:12:12 -- bdev/bdev_raid.sh@301 -- # raid_pid=68601 00:14:53.801 Process raid pid: 68601 00:14:53.801 05:12:12 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 68601' 00:14:53.801 05:12:12 -- bdev/bdev_raid.sh@303 -- # waitforlisten 68601 /var/tmp/spdk-raid.sock 00:14:53.801 05:12:12 -- common/autotest_common.sh@819 -- # '[' -z 68601 ']' 00:14:53.801 05:12:12 -- bdev/bdev_raid.sh@300 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:53.801 05:12:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:53.801 05:12:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:53.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:53.801 05:12:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:53.801 05:12:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:53.801 05:12:12 -- common/autotest_common.sh@10 -- # set +x 00:14:54.060 [2024-07-26 05:12:12.964212] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:14:54.060 [2024-07-26 05:12:12.964364] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.060 [2024-07-26 05:12:13.136593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.329 [2024-07-26 05:12:13.312583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.600 [2024-07-26 05:12:13.467752] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.859 05:12:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:54.859 05:12:13 -- common/autotest_common.sh@852 -- # return 0 00:14:54.859 05:12:13 -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:14:55.118 Base_1 00:14:55.118 05:12:14 -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:14:55.376 Base_2 00:14:55.376 05:12:14 -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:14:55.634 [2024-07-26 05:12:14.585666] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:55.634 [2024-07-26 05:12:14.587707] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:55.634 [2024-07-26 05:12:14.587794] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:14:55.634 [2024-07-26 05:12:14.587812] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:55.634 [2024-07-26 05:12:14.587997] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005450 00:14:55.634 [2024-07-26 05:12:14.588367] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:14:55.634 [2024-07-26 05:12:14.588392] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x516000006f80 00:14:55.634 [2024-07-26 05:12:14.588573] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.634 05:12:14 -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:14:55.892 [2024-07-26 05:12:14.781741] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:55.892 [2024-07-26 05:12:14.781799] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:14:55.892 true 00:14:55.892 05:12:14 -- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:55.892 05:12:14 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:14:56.150 [2024-07-26 05:12:15.034012] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:56.150 05:12:15 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:14:56.150 05:12:15 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:14:56.150 05:12:15 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:14:56.150 05:12:15 -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:14:56.150 [2024-07-26 05:12:15.249875] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 
00:14:56.150 [2024-07-26 05:12:15.249916] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:14:56.150 [2024-07-26 05:12:15.249957] raid0.c: 402:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144 00:14:56.150 [2024-07-26 05:12:15.249985] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:56.150 true 00:14:56.408 05:12:15 -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:56.408 05:12:15 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:14:56.408 [2024-07-26 05:12:15.470124] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:56.408 05:12:15 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:14:56.408 05:12:15 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:14:56.408 05:12:15 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:14:56.408 05:12:15 -- bdev/bdev_raid.sh@332 -- # killprocess 68601 00:14:56.408 05:12:15 -- common/autotest_common.sh@926 -- # '[' -z 68601 ']' 00:14:56.408 05:12:15 -- common/autotest_common.sh@930 -- # kill -0 68601 00:14:56.408 05:12:15 -- common/autotest_common.sh@931 -- # uname 00:14:56.408 05:12:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:56.408 05:12:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68601 00:14:56.667 05:12:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:56.667 05:12:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:56.667 killing process with pid 68601 00:14:56.667 05:12:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68601' 00:14:56.667 05:12:15 -- common/autotest_common.sh@945 -- # kill 68601 00:14:56.667 [2024-07-26 05:12:15.521364] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:56.667 05:12:15 -- common/autotest_common.sh@950 -- # wait 68601 00:14:56.667 [2024-07-26 05:12:15.521481] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:56.667 [2024-07-26 05:12:15.521547] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:56.667 [2024-07-26 05:12:15.521566] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Raid, state offline 00:14:56.667 [2024-07-26 05:12:15.522282] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:57.603 05:12:16 -- bdev/bdev_raid.sh@334 -- # return 0 00:14:57.603 00:14:57.603 real 0m3.632s 00:14:57.603 user 0m5.106s 00:14:57.603 sys 0m0.506s 00:14:57.603 05:12:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:57.603 05:12:16 -- common/autotest_common.sh@10 -- # set +x 00:14:57.603 ************************************ 00:14:57.603 END TEST raid0_resize_test 00:14:57.603 ************************************ 00:14:57.603 05:12:16 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:14:57.603 05:12:16 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:57.603 05:12:16 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:14:57.603 05:12:16 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:57.603 05:12:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:57.603 05:12:16 -- common/autotest_common.sh@10 -- # set +x 00:14:57.603 ************************************ 00:14:57.603 START TEST raid_state_function_test 
00:14:57.603 ************************************ 00:14:57.603 05:12:16 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 false 00:14:57.603 05:12:16 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:14:57.603 05:12:16 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:57.603 05:12:16 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:57.603 05:12:16 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:57.603 05:12:16 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:57.603 05:12:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:57.603 05:12:16 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:14:57.603 05:12:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:57.603 05:12:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:57.603 05:12:16 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:14:57.603 05:12:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:57.603 05:12:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:57.603 05:12:16 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:57.603 05:12:16 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:57.603 05:12:16 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:57.603 05:12:16 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:57.603 05:12:16 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:57.603 05:12:16 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:57.603 05:12:16 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:14:57.603 05:12:16 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:57.603 05:12:16 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:57.603 05:12:16 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:57.603 05:12:16 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:57.603 05:12:16 -- bdev/bdev_raid.sh@226 -- # raid_pid=68678 00:14:57.603 05:12:16 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:57.603 Process raid pid: 68678 00:14:57.603 05:12:16 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 68678' 00:14:57.603 05:12:16 -- bdev/bdev_raid.sh@228 -- # waitforlisten 68678 /var/tmp/spdk-raid.sock 00:14:57.603 05:12:16 -- common/autotest_common.sh@819 -- # '[' -z 68678 ']' 00:14:57.603 05:12:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:57.603 05:12:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:57.603 05:12:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:57.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:57.603 05:12:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:57.603 05:12:16 -- common/autotest_common.sh@10 -- # set +x 00:14:57.603 [2024-07-26 05:12:16.654123] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
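The raid0_resize_test that wrapped up just above is the simplest of these: it checks that a raid0 volume grows only after every member has grown. With 512-byte blocks, each 32 MiB null bdev is 65536 blocks, so the two-member raid0 starts at 131072 blocks (64 MiB) and should only reach 262144 blocks (128 MiB) once both members have been resized. A condensed replay of the RPCs seen in the trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_null_create Base_1 32 512
    $rpc bdev_null_create Base_2 32 512
    $rpc bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid
    $rpc bdev_null_resize Base_1 64                       # grow only the first member
    blkcnt=$($rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks')
    [ "$blkcnt" -eq 131072 ]                              # still 64 MiB: capacity follows the smallest member
    $rpc bdev_null_resize Base_2 64                       # now grow the second member too
    blkcnt=$($rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks')
    [ "$blkcnt" -eq 262144 ]                              # 128 MiB once both members are 64 MiB

The intermediate check is the interesting one: resizing a single base bdev emits the 'was resized' notice seen above but leaves the raid's block count unchanged, because raid0 capacity tracks the smallest member.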
00:14:57.603 [2024-07-26 05:12:16.654299] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.862 [2024-07-26 05:12:16.823104] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.120 [2024-07-26 05:12:16.989045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.120 [2024-07-26 05:12:17.141413] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:58.688 05:12:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:58.688 05:12:17 -- common/autotest_common.sh@852 -- # return 0 00:14:58.688 05:12:17 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:58.946 [2024-07-26 05:12:17.805226] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:58.946 [2024-07-26 05:12:17.805316] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:58.946 [2024-07-26 05:12:17.805330] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:58.946 [2024-07-26 05:12:17.805344] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:58.946 05:12:17 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:58.946 05:12:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:58.946 05:12:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:58.946 05:12:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:58.946 05:12:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:58.946 05:12:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:58.946 05:12:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:58.946 05:12:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:58.946 05:12:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:58.946 05:12:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:58.946 05:12:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:58.946 05:12:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.204 05:12:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:59.204 "name": "Existed_Raid", 00:14:59.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.204 "strip_size_kb": 64, 00:14:59.204 "state": "configuring", 00:14:59.204 "raid_level": "raid0", 00:14:59.204 "superblock": false, 00:14:59.204 "num_base_bdevs": 2, 00:14:59.204 "num_base_bdevs_discovered": 0, 00:14:59.204 "num_base_bdevs_operational": 2, 00:14:59.204 "base_bdevs_list": [ 00:14:59.204 { 00:14:59.204 "name": "BaseBdev1", 00:14:59.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.205 "is_configured": false, 00:14:59.205 "data_offset": 0, 00:14:59.205 "data_size": 0 00:14:59.205 }, 00:14:59.205 { 00:14:59.205 "name": "BaseBdev2", 00:14:59.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.205 "is_configured": false, 00:14:59.205 "data_offset": 0, 00:14:59.205 "data_size": 0 00:14:59.205 } 00:14:59.205 ] 00:14:59.205 }' 00:14:59.205 05:12:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:59.205 05:12:18 -- 
common/autotest_common.sh@10 -- # set +x 00:14:59.463 05:12:18 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:59.721 [2024-07-26 05:12:18.577303] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:59.721 [2024-07-26 05:12:18.577374] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:14:59.721 05:12:18 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:59.721 [2024-07-26 05:12:18.765416] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:59.721 [2024-07-26 05:12:18.765478] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:59.721 [2024-07-26 05:12:18.765499] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:59.721 [2024-07-26 05:12:18.765514] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:59.721 05:12:18 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:59.980 [2024-07-26 05:12:19.080146] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:59.980 BaseBdev1 00:15:00.238 05:12:19 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:00.239 05:12:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:00.239 05:12:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:00.239 05:12:19 -- common/autotest_common.sh@889 -- # local i 00:15:00.239 05:12:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:00.239 05:12:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:00.239 05:12:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:00.239 05:12:19 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:00.497 [ 00:15:00.497 { 00:15:00.497 "name": "BaseBdev1", 00:15:00.497 "aliases": [ 00:15:00.497 "47398813-6f51-42b5-b4f9-25cba3b576c8" 00:15:00.497 ], 00:15:00.497 "product_name": "Malloc disk", 00:15:00.497 "block_size": 512, 00:15:00.497 "num_blocks": 65536, 00:15:00.497 "uuid": "47398813-6f51-42b5-b4f9-25cba3b576c8", 00:15:00.497 "assigned_rate_limits": { 00:15:00.497 "rw_ios_per_sec": 0, 00:15:00.497 "rw_mbytes_per_sec": 0, 00:15:00.497 "r_mbytes_per_sec": 0, 00:15:00.497 "w_mbytes_per_sec": 0 00:15:00.497 }, 00:15:00.497 "claimed": true, 00:15:00.497 "claim_type": "exclusive_write", 00:15:00.497 "zoned": false, 00:15:00.497 "supported_io_types": { 00:15:00.497 "read": true, 00:15:00.497 "write": true, 00:15:00.497 "unmap": true, 00:15:00.497 "write_zeroes": true, 00:15:00.497 "flush": true, 00:15:00.497 "reset": true, 00:15:00.497 "compare": false, 00:15:00.497 "compare_and_write": false, 00:15:00.497 "abort": true, 00:15:00.497 "nvme_admin": false, 00:15:00.497 "nvme_io": false 00:15:00.497 }, 00:15:00.497 "memory_domains": [ 00:15:00.497 { 00:15:00.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.497 "dma_device_type": 2 00:15:00.497 } 00:15:00.497 ], 00:15:00.497 "driver_specific": {} 00:15:00.497 } 00:15:00.497 ] 00:15:00.497 05:12:19 
-- common/autotest_common.sh@895 -- # return 0 00:15:00.497 05:12:19 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:00.497 05:12:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:00.497 05:12:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:00.497 05:12:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:00.497 05:12:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:00.497 05:12:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:00.497 05:12:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:00.497 05:12:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:00.497 05:12:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:00.497 05:12:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:00.497 05:12:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.497 05:12:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.756 05:12:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:00.756 "name": "Existed_Raid", 00:15:00.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.756 "strip_size_kb": 64, 00:15:00.756 "state": "configuring", 00:15:00.756 "raid_level": "raid0", 00:15:00.756 "superblock": false, 00:15:00.756 "num_base_bdevs": 2, 00:15:00.756 "num_base_bdevs_discovered": 1, 00:15:00.756 "num_base_bdevs_operational": 2, 00:15:00.756 "base_bdevs_list": [ 00:15:00.756 { 00:15:00.756 "name": "BaseBdev1", 00:15:00.756 "uuid": "47398813-6f51-42b5-b4f9-25cba3b576c8", 00:15:00.756 "is_configured": true, 00:15:00.756 "data_offset": 0, 00:15:00.756 "data_size": 65536 00:15:00.756 }, 00:15:00.756 { 00:15:00.756 "name": "BaseBdev2", 00:15:00.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.756 "is_configured": false, 00:15:00.757 "data_offset": 0, 00:15:00.757 "data_size": 0 00:15:00.757 } 00:15:00.757 ] 00:15:00.757 }' 00:15:00.757 05:12:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:00.757 05:12:19 -- common/autotest_common.sh@10 -- # set +x 00:15:01.015 05:12:20 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:01.274 [2024-07-26 05:12:20.248640] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:01.274 [2024-07-26 05:12:20.248714] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:15:01.274 05:12:20 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:01.274 05:12:20 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:01.569 [2024-07-26 05:12:20.500733] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:01.569 [2024-07-26 05:12:20.502802] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:01.569 [2024-07-26 05:12:20.502865] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:01.569 05:12:20 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:01.569 05:12:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:01.570 05:12:20 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:01.570 05:12:20 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:01.570 05:12:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:01.570 05:12:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:01.570 05:12:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:01.570 05:12:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:01.570 05:12:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:01.570 05:12:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:01.570 05:12:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:01.570 05:12:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:01.570 05:12:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.570 05:12:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.829 05:12:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:01.829 "name": "Existed_Raid", 00:15:01.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.829 "strip_size_kb": 64, 00:15:01.829 "state": "configuring", 00:15:01.829 "raid_level": "raid0", 00:15:01.829 "superblock": false, 00:15:01.829 "num_base_bdevs": 2, 00:15:01.829 "num_base_bdevs_discovered": 1, 00:15:01.829 "num_base_bdevs_operational": 2, 00:15:01.829 "base_bdevs_list": [ 00:15:01.829 { 00:15:01.829 "name": "BaseBdev1", 00:15:01.829 "uuid": "47398813-6f51-42b5-b4f9-25cba3b576c8", 00:15:01.829 "is_configured": true, 00:15:01.829 "data_offset": 0, 00:15:01.829 "data_size": 65536 00:15:01.829 }, 00:15:01.829 { 00:15:01.829 "name": "BaseBdev2", 00:15:01.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.829 "is_configured": false, 00:15:01.829 "data_offset": 0, 00:15:01.829 "data_size": 0 00:15:01.829 } 00:15:01.829 ] 00:15:01.829 }' 00:15:01.829 05:12:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:01.829 05:12:20 -- common/autotest_common.sh@10 -- # set +x 00:15:02.089 05:12:20 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:02.347 [2024-07-26 05:12:21.215459] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:02.347 [2024-07-26 05:12:21.215526] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:15:02.347 [2024-07-26 05:12:21.215539] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:02.347 [2024-07-26 05:12:21.215661] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:15:02.347 [2024-07-26 05:12:21.216122] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:15:02.347 [2024-07-26 05:12:21.216167] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:15:02.347 [2024-07-26 05:12:21.216482] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.347 BaseBdev2 00:15:02.347 05:12:21 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:02.347 05:12:21 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:02.347 05:12:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:02.347 05:12:21 -- common/autotest_common.sh@889 -- # local i 00:15:02.347 05:12:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:02.348 05:12:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:02.348 
05:12:21 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:02.606 05:12:21 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:02.865 [ 00:15:02.865 { 00:15:02.865 "name": "BaseBdev2", 00:15:02.865 "aliases": [ 00:15:02.865 "ff833ee0-8b83-4018-964b-0070bb33ef15" 00:15:02.865 ], 00:15:02.865 "product_name": "Malloc disk", 00:15:02.865 "block_size": 512, 00:15:02.865 "num_blocks": 65536, 00:15:02.865 "uuid": "ff833ee0-8b83-4018-964b-0070bb33ef15", 00:15:02.865 "assigned_rate_limits": { 00:15:02.865 "rw_ios_per_sec": 0, 00:15:02.865 "rw_mbytes_per_sec": 0, 00:15:02.865 "r_mbytes_per_sec": 0, 00:15:02.865 "w_mbytes_per_sec": 0 00:15:02.865 }, 00:15:02.865 "claimed": true, 00:15:02.865 "claim_type": "exclusive_write", 00:15:02.865 "zoned": false, 00:15:02.865 "supported_io_types": { 00:15:02.865 "read": true, 00:15:02.865 "write": true, 00:15:02.865 "unmap": true, 00:15:02.865 "write_zeroes": true, 00:15:02.865 "flush": true, 00:15:02.865 "reset": true, 00:15:02.865 "compare": false, 00:15:02.865 "compare_and_write": false, 00:15:02.865 "abort": true, 00:15:02.865 "nvme_admin": false, 00:15:02.865 "nvme_io": false 00:15:02.865 }, 00:15:02.865 "memory_domains": [ 00:15:02.865 { 00:15:02.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.865 "dma_device_type": 2 00:15:02.865 } 00:15:02.865 ], 00:15:02.865 "driver_specific": {} 00:15:02.865 } 00:15:02.865 ] 00:15:02.865 05:12:21 -- common/autotest_common.sh@895 -- # return 0 00:15:02.865 05:12:21 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:02.865 05:12:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:02.865 05:12:21 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:15:02.865 05:12:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:02.865 05:12:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:02.865 05:12:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:02.865 05:12:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:02.865 05:12:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:02.865 05:12:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:02.865 05:12:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:02.865 05:12:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:02.865 05:12:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:02.865 05:12:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.865 05:12:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.865 05:12:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:02.865 "name": "Existed_Raid", 00:15:02.865 "uuid": "ecfe90d8-5e76-4148-8a37-c59f7a346ce8", 00:15:02.865 "strip_size_kb": 64, 00:15:02.865 "state": "online", 00:15:02.865 "raid_level": "raid0", 00:15:02.865 "superblock": false, 00:15:02.865 "num_base_bdevs": 2, 00:15:02.865 "num_base_bdevs_discovered": 2, 00:15:02.865 "num_base_bdevs_operational": 2, 00:15:02.865 "base_bdevs_list": [ 00:15:02.865 { 00:15:02.865 "name": "BaseBdev1", 00:15:02.865 "uuid": "47398813-6f51-42b5-b4f9-25cba3b576c8", 00:15:02.865 "is_configured": true, 00:15:02.865 "data_offset": 0, 00:15:02.865 "data_size": 65536 00:15:02.865 }, 00:15:02.865 { 00:15:02.865 "name": "BaseBdev2", 
00:15:02.865 "uuid": "ff833ee0-8b83-4018-964b-0070bb33ef15", 00:15:02.865 "is_configured": true, 00:15:02.865 "data_offset": 0, 00:15:02.865 "data_size": 65536 00:15:02.865 } 00:15:02.865 ] 00:15:02.865 }' 00:15:02.865 05:12:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:02.865 05:12:21 -- common/autotest_common.sh@10 -- # set +x 00:15:03.432 05:12:22 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:03.432 [2024-07-26 05:12:22.443898] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:03.432 [2024-07-26 05:12:22.443934] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:03.432 [2024-07-26 05:12:22.444009] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:03.432 05:12:22 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:03.432 05:12:22 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:15:03.432 05:12:22 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:03.432 05:12:22 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:03.432 05:12:22 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:03.432 05:12:22 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:15:03.432 05:12:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:03.432 05:12:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:03.432 05:12:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:03.432 05:12:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:03.432 05:12:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:03.432 05:12:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:03.432 05:12:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:03.432 05:12:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:03.432 05:12:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:03.432 05:12:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.432 05:12:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.691 05:12:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:03.691 "name": "Existed_Raid", 00:15:03.691 "uuid": "ecfe90d8-5e76-4148-8a37-c59f7a346ce8", 00:15:03.691 "strip_size_kb": 64, 00:15:03.691 "state": "offline", 00:15:03.691 "raid_level": "raid0", 00:15:03.691 "superblock": false, 00:15:03.691 "num_base_bdevs": 2, 00:15:03.691 "num_base_bdevs_discovered": 1, 00:15:03.691 "num_base_bdevs_operational": 1, 00:15:03.691 "base_bdevs_list": [ 00:15:03.691 { 00:15:03.691 "name": null, 00:15:03.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.691 "is_configured": false, 00:15:03.691 "data_offset": 0, 00:15:03.691 "data_size": 65536 00:15:03.691 }, 00:15:03.691 { 00:15:03.691 "name": "BaseBdev2", 00:15:03.691 "uuid": "ff833ee0-8b83-4018-964b-0070bb33ef15", 00:15:03.691 "is_configured": true, 00:15:03.691 "data_offset": 0, 00:15:03.691 "data_size": 65536 00:15:03.691 } 00:15:03.691 ] 00:15:03.691 }' 00:15:03.691 05:12:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:03.691 05:12:22 -- common/autotest_common.sh@10 -- # set +x 00:15:04.258 05:12:23 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:04.258 05:12:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:04.258 05:12:23 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.258 05:12:23 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:04.258 05:12:23 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:04.258 05:12:23 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:04.258 05:12:23 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:04.516 [2024-07-26 05:12:23.591438] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:04.516 [2024-07-26 05:12:23.591685] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:15:04.774 05:12:23 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:04.774 05:12:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:04.774 05:12:23 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.774 05:12:23 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:05.033 05:12:23 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:05.033 05:12:23 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:05.033 05:12:23 -- bdev/bdev_raid.sh@287 -- # killprocess 68678 00:15:05.033 05:12:23 -- common/autotest_common.sh@926 -- # '[' -z 68678 ']' 00:15:05.033 05:12:23 -- common/autotest_common.sh@930 -- # kill -0 68678 00:15:05.033 05:12:23 -- common/autotest_common.sh@931 -- # uname 00:15:05.033 05:12:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:05.033 05:12:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68678 00:15:05.033 killing process with pid 68678 00:15:05.033 05:12:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:05.033 05:12:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:05.033 05:12:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68678' 00:15:05.033 05:12:23 -- common/autotest_common.sh@945 -- # kill 68678 00:15:05.033 [2024-07-26 05:12:23.951805] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:05.033 05:12:23 -- common/autotest_common.sh@950 -- # wait 68678 00:15:05.033 [2024-07-26 05:12:23.951911] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:05.969 05:12:24 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:05.969 00:15:05.969 real 0m8.370s 00:15:05.969 user 0m13.682s 00:15:05.969 sys 0m1.247s 00:15:05.969 05:12:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:05.969 ************************************ 00:15:05.969 END TEST raid_state_function_test 00:15:05.969 ************************************ 00:15:05.969 05:12:24 -- common/autotest_common.sh@10 -- # set +x 00:15:05.969 05:12:25 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:15:05.969 05:12:25 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:05.969 05:12:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:05.969 05:12:25 -- common/autotest_common.sh@10 -- # set +x 00:15:05.969 ************************************ 00:15:05.969 START TEST raid_state_function_test_sb 00:15:05.969 ************************************ 00:15:05.969 05:12:25 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 true 00:15:05.969 05:12:25 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:15:05.969 05:12:25 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:05.969 05:12:25 -- 
bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:05.969 05:12:25 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:05.969 05:12:25 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:05.969 05:12:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:05.969 05:12:25 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:15:05.969 05:12:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:05.969 05:12:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:05.969 05:12:25 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:15:05.969 05:12:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:05.969 05:12:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:05.969 05:12:25 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:05.969 05:12:25 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:05.969 05:12:25 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:05.969 05:12:25 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:05.969 05:12:25 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:05.969 05:12:25 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:05.969 05:12:25 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:15:05.969 05:12:25 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:05.969 05:12:25 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:05.969 05:12:25 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:05.969 05:12:25 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:05.969 Process raid pid: 68960 00:15:05.969 05:12:25 -- bdev/bdev_raid.sh@226 -- # raid_pid=68960 00:15:05.969 05:12:25 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 68960' 00:15:05.969 05:12:25 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:05.969 05:12:25 -- bdev/bdev_raid.sh@228 -- # waitforlisten 68960 /var/tmp/spdk-raid.sock 00:15:05.969 05:12:25 -- common/autotest_common.sh@819 -- # '[' -z 68960 ']' 00:15:05.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:05.969 05:12:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:05.969 05:12:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:05.969 05:12:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:05.969 05:12:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:05.969 05:12:25 -- common/autotest_common.sh@10 -- # set +x 00:15:06.227 [2024-07-26 05:12:25.088547] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:15:06.227 [2024-07-26 05:12:25.088686] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.227 [2024-07-26 05:12:25.260632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.485 [2024-07-26 05:12:25.431119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.485 [2024-07-26 05:12:25.585339] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:07.052 05:12:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:07.052 05:12:26 -- common/autotest_common.sh@852 -- # return 0 00:15:07.052 05:12:26 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:07.310 [2024-07-26 05:12:26.216834] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:07.310 [2024-07-26 05:12:26.216919] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:07.310 [2024-07-26 05:12:26.216934] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:07.310 [2024-07-26 05:12:26.216948] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:07.310 05:12:26 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:07.310 05:12:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:07.310 05:12:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:07.310 05:12:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:07.310 05:12:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:07.310 05:12:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:07.310 05:12:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:07.310 05:12:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:07.310 05:12:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:07.310 05:12:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:07.310 05:12:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.310 05:12:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.568 05:12:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:07.568 "name": "Existed_Raid", 00:15:07.568 "uuid": "1b10499b-e3d6-488f-ad32-a23e3fcd9666", 00:15:07.568 "strip_size_kb": 64, 00:15:07.568 "state": "configuring", 00:15:07.568 "raid_level": "raid0", 00:15:07.568 "superblock": true, 00:15:07.568 "num_base_bdevs": 2, 00:15:07.568 "num_base_bdevs_discovered": 0, 00:15:07.568 "num_base_bdevs_operational": 2, 00:15:07.568 "base_bdevs_list": [ 00:15:07.568 { 00:15:07.568 "name": "BaseBdev1", 00:15:07.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.568 "is_configured": false, 00:15:07.568 "data_offset": 0, 00:15:07.568 "data_size": 0 00:15:07.568 }, 00:15:07.568 { 00:15:07.568 "name": "BaseBdev2", 00:15:07.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.568 "is_configured": false, 00:15:07.568 "data_offset": 0, 00:15:07.568 "data_size": 0 00:15:07.568 } 00:15:07.568 ] 00:15:07.568 }' 00:15:07.568 05:12:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:07.568 05:12:26 -- 
common/autotest_common.sh@10 -- # set +x 00:15:07.826 05:12:26 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:08.084 [2024-07-26 05:12:27.084899] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:08.084 [2024-07-26 05:12:27.084951] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:15:08.084 05:12:27 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:08.342 [2024-07-26 05:12:27.321003] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:08.342 [2024-07-26 05:12:27.321080] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:08.342 [2024-07-26 05:12:27.321102] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:08.342 [2024-07-26 05:12:27.321118] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:08.342 05:12:27 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:08.599 [2024-07-26 05:12:27.599055] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:08.599 BaseBdev1 00:15:08.599 05:12:27 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:08.599 05:12:27 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:08.599 05:12:27 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:08.599 05:12:27 -- common/autotest_common.sh@889 -- # local i 00:15:08.599 05:12:27 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:08.599 05:12:27 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:08.599 05:12:27 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:08.857 05:12:27 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:09.120 [ 00:15:09.120 { 00:15:09.120 "name": "BaseBdev1", 00:15:09.120 "aliases": [ 00:15:09.120 "354b6e86-19fa-42fb-a7fa-ab312aca63a7" 00:15:09.120 ], 00:15:09.120 "product_name": "Malloc disk", 00:15:09.120 "block_size": 512, 00:15:09.120 "num_blocks": 65536, 00:15:09.120 "uuid": "354b6e86-19fa-42fb-a7fa-ab312aca63a7", 00:15:09.120 "assigned_rate_limits": { 00:15:09.120 "rw_ios_per_sec": 0, 00:15:09.120 "rw_mbytes_per_sec": 0, 00:15:09.120 "r_mbytes_per_sec": 0, 00:15:09.120 "w_mbytes_per_sec": 0 00:15:09.120 }, 00:15:09.120 "claimed": true, 00:15:09.120 "claim_type": "exclusive_write", 00:15:09.120 "zoned": false, 00:15:09.120 "supported_io_types": { 00:15:09.120 "read": true, 00:15:09.120 "write": true, 00:15:09.120 "unmap": true, 00:15:09.120 "write_zeroes": true, 00:15:09.120 "flush": true, 00:15:09.120 "reset": true, 00:15:09.120 "compare": false, 00:15:09.120 "compare_and_write": false, 00:15:09.120 "abort": true, 00:15:09.120 "nvme_admin": false, 00:15:09.120 "nvme_io": false 00:15:09.120 }, 00:15:09.120 "memory_domains": [ 00:15:09.120 { 00:15:09.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.120 "dma_device_type": 2 00:15:09.120 } 00:15:09.120 ], 00:15:09.120 "driver_specific": {} 00:15:09.120 } 00:15:09.120 ] 00:15:09.120 
05:12:28 -- common/autotest_common.sh@895 -- # return 0 00:15:09.120 05:12:28 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:09.120 05:12:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:09.120 05:12:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:09.120 05:12:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:09.120 05:12:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:09.120 05:12:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:09.120 05:12:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:09.120 05:12:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:09.120 05:12:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:09.120 05:12:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:09.120 05:12:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.120 05:12:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.387 05:12:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:09.387 "name": "Existed_Raid", 00:15:09.387 "uuid": "1586c8ac-2942-4411-8aeb-8e319c6c365d", 00:15:09.387 "strip_size_kb": 64, 00:15:09.387 "state": "configuring", 00:15:09.387 "raid_level": "raid0", 00:15:09.387 "superblock": true, 00:15:09.387 "num_base_bdevs": 2, 00:15:09.387 "num_base_bdevs_discovered": 1, 00:15:09.387 "num_base_bdevs_operational": 2, 00:15:09.387 "base_bdevs_list": [ 00:15:09.387 { 00:15:09.387 "name": "BaseBdev1", 00:15:09.387 "uuid": "354b6e86-19fa-42fb-a7fa-ab312aca63a7", 00:15:09.387 "is_configured": true, 00:15:09.387 "data_offset": 2048, 00:15:09.387 "data_size": 63488 00:15:09.387 }, 00:15:09.387 { 00:15:09.387 "name": "BaseBdev2", 00:15:09.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.387 "is_configured": false, 00:15:09.387 "data_offset": 0, 00:15:09.387 "data_size": 0 00:15:09.387 } 00:15:09.387 ] 00:15:09.387 }' 00:15:09.387 05:12:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:09.387 05:12:28 -- common/autotest_common.sh@10 -- # set +x 00:15:09.646 05:12:28 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:09.905 [2024-07-26 05:12:28.839527] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:09.905 [2024-07-26 05:12:28.839586] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:15:09.905 05:12:28 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:09.905 05:12:28 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:10.163 05:12:29 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:10.421 BaseBdev1 00:15:10.421 05:12:29 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:10.421 05:12:29 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:10.421 05:12:29 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:10.421 05:12:29 -- common/autotest_common.sh@889 -- # local i 00:15:10.421 05:12:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:10.421 05:12:29 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:10.421 05:12:29 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:10.679 05:12:29 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:10.679 [ 00:15:10.679 { 00:15:10.679 "name": "BaseBdev1", 00:15:10.679 "aliases": [ 00:15:10.679 "5680d963-280f-473e-8696-7a1eb4a98d65" 00:15:10.679 ], 00:15:10.679 "product_name": "Malloc disk", 00:15:10.679 "block_size": 512, 00:15:10.679 "num_blocks": 65536, 00:15:10.679 "uuid": "5680d963-280f-473e-8696-7a1eb4a98d65", 00:15:10.679 "assigned_rate_limits": { 00:15:10.679 "rw_ios_per_sec": 0, 00:15:10.679 "rw_mbytes_per_sec": 0, 00:15:10.679 "r_mbytes_per_sec": 0, 00:15:10.679 "w_mbytes_per_sec": 0 00:15:10.679 }, 00:15:10.679 "claimed": false, 00:15:10.679 "zoned": false, 00:15:10.679 "supported_io_types": { 00:15:10.679 "read": true, 00:15:10.679 "write": true, 00:15:10.680 "unmap": true, 00:15:10.680 "write_zeroes": true, 00:15:10.680 "flush": true, 00:15:10.680 "reset": true, 00:15:10.680 "compare": false, 00:15:10.680 "compare_and_write": false, 00:15:10.680 "abort": true, 00:15:10.680 "nvme_admin": false, 00:15:10.680 "nvme_io": false 00:15:10.680 }, 00:15:10.680 "memory_domains": [ 00:15:10.680 { 00:15:10.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.680 "dma_device_type": 2 00:15:10.680 } 00:15:10.680 ], 00:15:10.680 "driver_specific": {} 00:15:10.680 } 00:15:10.680 ] 00:15:10.680 05:12:29 -- common/autotest_common.sh@895 -- # return 0 00:15:10.680 05:12:29 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:10.938 [2024-07-26 05:12:29.930719] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:10.938 [2024-07-26 05:12:29.932839] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:10.938 [2024-07-26 05:12:29.932940] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:10.938 05:12:29 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:10.938 05:12:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:10.938 05:12:29 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:10.938 05:12:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:10.938 05:12:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:10.938 05:12:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:10.938 05:12:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:10.938 05:12:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:10.938 05:12:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:10.938 05:12:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:10.938 05:12:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:10.938 05:12:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:10.938 05:12:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.938 05:12:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.196 05:12:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:11.196 "name": "Existed_Raid", 00:15:11.196 "uuid": "cb66dfb5-1761-4efb-a422-ee635f61671f", 00:15:11.196 "strip_size_kb": 64, 00:15:11.196 "state": 
"configuring", 00:15:11.196 "raid_level": "raid0", 00:15:11.196 "superblock": true, 00:15:11.196 "num_base_bdevs": 2, 00:15:11.196 "num_base_bdevs_discovered": 1, 00:15:11.196 "num_base_bdevs_operational": 2, 00:15:11.196 "base_bdevs_list": [ 00:15:11.196 { 00:15:11.196 "name": "BaseBdev1", 00:15:11.196 "uuid": "5680d963-280f-473e-8696-7a1eb4a98d65", 00:15:11.196 "is_configured": true, 00:15:11.196 "data_offset": 2048, 00:15:11.196 "data_size": 63488 00:15:11.196 }, 00:15:11.196 { 00:15:11.196 "name": "BaseBdev2", 00:15:11.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.196 "is_configured": false, 00:15:11.196 "data_offset": 0, 00:15:11.196 "data_size": 0 00:15:11.196 } 00:15:11.196 ] 00:15:11.196 }' 00:15:11.196 05:12:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:11.196 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:15:11.454 05:12:30 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:11.712 [2024-07-26 05:12:30.761383] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:11.712 [2024-07-26 05:12:30.761903] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:15:11.712 [2024-07-26 05:12:30.761931] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:11.712 [2024-07-26 05:12:30.762120] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:15:11.712 BaseBdev2 00:15:11.712 [2024-07-26 05:12:30.762568] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:15:11.712 [2024-07-26 05:12:30.762596] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:15:11.712 [2024-07-26 05:12:30.762774] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.712 05:12:30 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:11.712 05:12:30 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:11.712 05:12:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:11.712 05:12:30 -- common/autotest_common.sh@889 -- # local i 00:15:11.712 05:12:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:11.712 05:12:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:11.712 05:12:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:11.970 05:12:30 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:12.228 [ 00:15:12.228 { 00:15:12.228 "name": "BaseBdev2", 00:15:12.228 "aliases": [ 00:15:12.228 "0e592da3-9656-4abd-a3a2-f53f7d77ffc8" 00:15:12.228 ], 00:15:12.228 "product_name": "Malloc disk", 00:15:12.228 "block_size": 512, 00:15:12.228 "num_blocks": 65536, 00:15:12.228 "uuid": "0e592da3-9656-4abd-a3a2-f53f7d77ffc8", 00:15:12.228 "assigned_rate_limits": { 00:15:12.228 "rw_ios_per_sec": 0, 00:15:12.228 "rw_mbytes_per_sec": 0, 00:15:12.228 "r_mbytes_per_sec": 0, 00:15:12.228 "w_mbytes_per_sec": 0 00:15:12.228 }, 00:15:12.228 "claimed": true, 00:15:12.228 "claim_type": "exclusive_write", 00:15:12.228 "zoned": false, 00:15:12.228 "supported_io_types": { 00:15:12.228 "read": true, 00:15:12.228 "write": true, 00:15:12.228 "unmap": true, 00:15:12.228 "write_zeroes": true, 00:15:12.228 "flush": true, 00:15:12.228 
"reset": true, 00:15:12.228 "compare": false, 00:15:12.228 "compare_and_write": false, 00:15:12.228 "abort": true, 00:15:12.228 "nvme_admin": false, 00:15:12.228 "nvme_io": false 00:15:12.228 }, 00:15:12.228 "memory_domains": [ 00:15:12.228 { 00:15:12.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.228 "dma_device_type": 2 00:15:12.228 } 00:15:12.228 ], 00:15:12.228 "driver_specific": {} 00:15:12.228 } 00:15:12.228 ] 00:15:12.228 05:12:31 -- common/autotest_common.sh@895 -- # return 0 00:15:12.228 05:12:31 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:12.228 05:12:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:12.228 05:12:31 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:15:12.228 05:12:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:12.228 05:12:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:12.228 05:12:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:12.228 05:12:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:12.228 05:12:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:12.228 05:12:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:12.228 05:12:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:12.228 05:12:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:12.228 05:12:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:12.228 05:12:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.228 05:12:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.486 05:12:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:12.486 "name": "Existed_Raid", 00:15:12.486 "uuid": "cb66dfb5-1761-4efb-a422-ee635f61671f", 00:15:12.486 "strip_size_kb": 64, 00:15:12.486 "state": "online", 00:15:12.486 "raid_level": "raid0", 00:15:12.486 "superblock": true, 00:15:12.486 "num_base_bdevs": 2, 00:15:12.486 "num_base_bdevs_discovered": 2, 00:15:12.486 "num_base_bdevs_operational": 2, 00:15:12.486 "base_bdevs_list": [ 00:15:12.486 { 00:15:12.486 "name": "BaseBdev1", 00:15:12.486 "uuid": "5680d963-280f-473e-8696-7a1eb4a98d65", 00:15:12.486 "is_configured": true, 00:15:12.486 "data_offset": 2048, 00:15:12.486 "data_size": 63488 00:15:12.486 }, 00:15:12.486 { 00:15:12.486 "name": "BaseBdev2", 00:15:12.486 "uuid": "0e592da3-9656-4abd-a3a2-f53f7d77ffc8", 00:15:12.486 "is_configured": true, 00:15:12.486 "data_offset": 2048, 00:15:12.486 "data_size": 63488 00:15:12.486 } 00:15:12.486 ] 00:15:12.486 }' 00:15:12.486 05:12:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:12.486 05:12:31 -- common/autotest_common.sh@10 -- # set +x 00:15:12.744 05:12:31 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:13.002 [2024-07-26 05:12:32.041848] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:13.002 [2024-07-26 05:12:32.041937] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:13.002 [2024-07-26 05:12:32.042021] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:13.260 05:12:32 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:13.260 05:12:32 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:15:13.260 05:12:32 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:13.260 05:12:32 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:13.260 
05:12:32 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:13.260 05:12:32 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:15:13.260 05:12:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:13.260 05:12:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:13.260 05:12:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:13.260 05:12:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:13.260 05:12:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:13.260 05:12:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:13.260 05:12:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:13.260 05:12:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:13.260 05:12:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:13.260 05:12:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.260 05:12:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.518 05:12:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:13.518 "name": "Existed_Raid", 00:15:13.518 "uuid": "cb66dfb5-1761-4efb-a422-ee635f61671f", 00:15:13.518 "strip_size_kb": 64, 00:15:13.518 "state": "offline", 00:15:13.518 "raid_level": "raid0", 00:15:13.518 "superblock": true, 00:15:13.518 "num_base_bdevs": 2, 00:15:13.518 "num_base_bdevs_discovered": 1, 00:15:13.518 "num_base_bdevs_operational": 1, 00:15:13.518 "base_bdevs_list": [ 00:15:13.518 { 00:15:13.518 "name": null, 00:15:13.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.518 "is_configured": false, 00:15:13.518 "data_offset": 2048, 00:15:13.518 "data_size": 63488 00:15:13.518 }, 00:15:13.518 { 00:15:13.518 "name": "BaseBdev2", 00:15:13.518 "uuid": "0e592da3-9656-4abd-a3a2-f53f7d77ffc8", 00:15:13.518 "is_configured": true, 00:15:13.518 "data_offset": 2048, 00:15:13.518 "data_size": 63488 00:15:13.518 } 00:15:13.518 ] 00:15:13.518 }' 00:15:13.518 05:12:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:13.518 05:12:32 -- common/autotest_common.sh@10 -- # set +x 00:15:13.777 05:12:32 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:13.777 05:12:32 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:13.777 05:12:32 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.777 05:12:32 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:14.034 05:12:32 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:14.035 05:12:32 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:14.035 05:12:32 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:14.293 [2024-07-26 05:12:33.173781] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:14.293 [2024-07-26 05:12:33.173849] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:15:14.293 05:12:33 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:14.293 05:12:33 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:14.293 05:12:33 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:14.293 05:12:33 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.550 05:12:33 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:15:14.550 05:12:33 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:14.550 05:12:33 -- bdev/bdev_raid.sh@287 -- # killprocess 68960 00:15:14.550 05:12:33 -- common/autotest_common.sh@926 -- # '[' -z 68960 ']' 00:15:14.550 05:12:33 -- common/autotest_common.sh@930 -- # kill -0 68960 00:15:14.550 05:12:33 -- common/autotest_common.sh@931 -- # uname 00:15:14.550 05:12:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:14.550 05:12:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68960 00:15:14.550 killing process with pid 68960 00:15:14.550 05:12:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:14.550 05:12:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:14.550 05:12:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68960' 00:15:14.550 05:12:33 -- common/autotest_common.sh@945 -- # kill 68960 00:15:14.550 [2024-07-26 05:12:33.547260] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:14.550 05:12:33 -- common/autotest_common.sh@950 -- # wait 68960 00:15:14.550 [2024-07-26 05:12:33.547408] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:15.484 05:12:34 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:15.484 00:15:15.484 real 0m9.542s 00:15:15.484 user 0m15.760s 00:15:15.484 sys 0m1.367s 00:15:15.484 05:12:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:15.484 ************************************ 00:15:15.484 END TEST raid_state_function_test_sb 00:15:15.484 ************************************ 00:15:15.484 05:12:34 -- common/autotest_common.sh@10 -- # set +x 00:15:15.742 05:12:34 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:15:15.742 05:12:34 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:15:15.742 05:12:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:15.742 05:12:34 -- common/autotest_common.sh@10 -- # set +x 00:15:15.742 ************************************ 00:15:15.742 START TEST raid_superblock_test 00:15:15.742 ************************************ 00:15:15.742 05:12:34 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 2 00:15:15.742 05:12:34 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:15:15.742 05:12:34 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:15:15.742 05:12:34 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:15.742 05:12:34 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:15.742 05:12:34 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:15.742 05:12:34 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:15.742 05:12:34 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:15.742 05:12:34 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:15.742 05:12:34 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:15.742 05:12:34 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:15.742 05:12:34 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:15.742 05:12:34 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:15.742 05:12:34 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:15.742 05:12:34 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:15:15.742 05:12:34 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:15:15.742 05:12:34 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:15:15.742 05:12:34 -- bdev/bdev_raid.sh@357 -- # raid_pid=69261 00:15:15.742 05:12:34 -- bdev/bdev_raid.sh@356 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:15.742 05:12:34 -- bdev/bdev_raid.sh@358 -- # waitforlisten 69261 /var/tmp/spdk-raid.sock 00:15:15.742 05:12:34 -- common/autotest_common.sh@819 -- # '[' -z 69261 ']' 00:15:15.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:15.742 05:12:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:15.742 05:12:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:15.742 05:12:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:15.742 05:12:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:15.742 05:12:34 -- common/autotest_common.sh@10 -- # set +x 00:15:15.742 [2024-07-26 05:12:34.681435] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:15.742 [2024-07-26 05:12:34.681603] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69261 ] 00:15:16.000 [2024-07-26 05:12:34.852141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.000 [2024-07-26 05:12:35.013671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.257 [2024-07-26 05:12:35.174291] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:16.833 05:12:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:16.833 05:12:35 -- common/autotest_common.sh@852 -- # return 0 00:15:16.833 05:12:35 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:16.833 05:12:35 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:16.833 05:12:35 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:16.833 05:12:35 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:16.833 05:12:35 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:16.833 05:12:35 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:16.833 05:12:35 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:16.833 05:12:35 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:16.833 05:12:35 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:16.833 malloc1 00:15:16.833 05:12:35 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:17.107 [2024-07-26 05:12:36.149381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:17.107 [2024-07-26 05:12:36.149681] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.107 [2024-07-26 05:12:36.149912] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:15:17.107 [2024-07-26 05:12:36.149941] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.107 [2024-07-26 05:12:36.152363] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.107 [2024-07-26 05:12:36.152572] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:17.107 pt1 00:15:17.107 05:12:36 -- bdev/bdev_raid.sh@361 -- 
# (( i++ )) 00:15:17.107 05:12:36 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:17.107 05:12:36 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:17.107 05:12:36 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:17.107 05:12:36 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:17.107 05:12:36 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:17.107 05:12:36 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:17.107 05:12:36 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:17.107 05:12:36 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:17.366 malloc2 00:15:17.366 05:12:36 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:17.624 [2024-07-26 05:12:36.671317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:17.624 [2024-07-26 05:12:36.671577] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.624 [2024-07-26 05:12:36.671651] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:15:17.624 [2024-07-26 05:12:36.671920] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.624 [2024-07-26 05:12:36.674238] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.624 [2024-07-26 05:12:36.674476] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:17.624 pt2 00:15:17.624 05:12:36 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:17.624 05:12:36 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:17.624 05:12:36 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:15:17.883 [2024-07-26 05:12:36.855386] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:17.883 [2024-07-26 05:12:36.857222] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:17.883 [2024-07-26 05:12:36.857399] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007b80 00:15:17.883 [2024-07-26 05:12:36.857418] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:17.883 [2024-07-26 05:12:36.857518] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:15:17.883 [2024-07-26 05:12:36.857832] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007b80 00:15:17.883 [2024-07-26 05:12:36.857854] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007b80 00:15:17.883 [2024-07-26 05:12:36.858052] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.883 05:12:36 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:17.883 05:12:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:17.883 05:12:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:17.883 05:12:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:17.883 05:12:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:17.883 05:12:36 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=2 00:15:17.883 05:12:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:17.883 05:12:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:17.883 05:12:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:17.883 05:12:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:17.883 05:12:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:17.883 05:12:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.142 05:12:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:18.142 "name": "raid_bdev1", 00:15:18.142 "uuid": "2f6b2814-97d4-431b-86a5-97df49d95113", 00:15:18.142 "strip_size_kb": 64, 00:15:18.142 "state": "online", 00:15:18.142 "raid_level": "raid0", 00:15:18.142 "superblock": true, 00:15:18.142 "num_base_bdevs": 2, 00:15:18.142 "num_base_bdevs_discovered": 2, 00:15:18.142 "num_base_bdevs_operational": 2, 00:15:18.142 "base_bdevs_list": [ 00:15:18.142 { 00:15:18.142 "name": "pt1", 00:15:18.142 "uuid": "d7d8cced-16a7-5a61-af6f-054e97d7541a", 00:15:18.142 "is_configured": true, 00:15:18.142 "data_offset": 2048, 00:15:18.142 "data_size": 63488 00:15:18.142 }, 00:15:18.142 { 00:15:18.142 "name": "pt2", 00:15:18.142 "uuid": "f45a2c58-69e7-5f17-b9d5-3841fa5e8b0a", 00:15:18.142 "is_configured": true, 00:15:18.142 "data_offset": 2048, 00:15:18.142 "data_size": 63488 00:15:18.142 } 00:15:18.142 ] 00:15:18.142 }' 00:15:18.142 05:12:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:18.142 05:12:37 -- common/autotest_common.sh@10 -- # set +x 00:15:18.399 05:12:37 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:18.399 05:12:37 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:18.657 [2024-07-26 05:12:37.515825] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:18.657 05:12:37 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=2f6b2814-97d4-431b-86a5-97df49d95113 00:15:18.657 05:12:37 -- bdev/bdev_raid.sh@380 -- # '[' -z 2f6b2814-97d4-431b-86a5-97df49d95113 ']' 00:15:18.657 05:12:37 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:18.916 [2024-07-26 05:12:37.767671] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:18.916 [2024-07-26 05:12:37.767702] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:18.916 [2024-07-26 05:12:37.767776] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:18.916 [2024-07-26 05:12:37.767862] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:18.916 [2024-07-26 05:12:37.767876] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007b80 name raid_bdev1, state offline 00:15:18.916 05:12:37 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.916 05:12:37 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:18.916 05:12:37 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:18.916 05:12:37 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:18.916 05:12:37 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:18.916 05:12:37 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt1 00:15:19.175 05:12:38 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:19.175 05:12:38 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:19.461 05:12:38 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:19.461 05:12:38 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:19.718 05:12:38 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:19.718 05:12:38 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:19.718 05:12:38 -- common/autotest_common.sh@640 -- # local es=0 00:15:19.718 05:12:38 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:19.718 05:12:38 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:19.718 05:12:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:19.718 05:12:38 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:19.719 05:12:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:19.719 05:12:38 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:19.719 05:12:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:19.719 05:12:38 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:19.719 05:12:38 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:19.719 05:12:38 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:19.977 [2024-07-26 05:12:38.847892] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:19.977 [2024-07-26 05:12:38.850077] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:19.977 [2024-07-26 05:12:38.850337] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:19.977 [2024-07-26 05:12:38.850407] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:19.977 [2024-07-26 05:12:38.850434] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:19.977 [2024-07-26 05:12:38.850448] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name raid_bdev1, state configuring 00:15:19.977 request: 00:15:19.977 { 00:15:19.977 "name": "raid_bdev1", 00:15:19.977 "raid_level": "raid0", 00:15:19.977 "base_bdevs": [ 00:15:19.977 "malloc1", 00:15:19.977 "malloc2" 00:15:19.977 ], 00:15:19.977 "superblock": false, 00:15:19.977 "strip_size_kb": 64, 00:15:19.977 "method": "bdev_raid_create", 00:15:19.977 "req_id": 1 00:15:19.977 } 00:15:19.977 Got JSON-RPC error response 00:15:19.977 response: 00:15:19.977 { 00:15:19.977 "code": -17, 00:15:19.977 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:19.977 } 00:15:19.977 05:12:38 -- common/autotest_common.sh@643 -- # es=1 00:15:19.977 05:12:38 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:19.977 05:12:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:19.977 05:12:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:19.977 05:12:38 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:19.977 05:12:38 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.236 05:12:39 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:20.236 05:12:39 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:20.236 05:12:39 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:20.236 [2024-07-26 05:12:39.300006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:20.236 [2024-07-26 05:12:39.300115] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.236 [2024-07-26 05:12:39.300169] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:15:20.236 [2024-07-26 05:12:39.300185] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.236 [2024-07-26 05:12:39.302664] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.236 [2024-07-26 05:12:39.302862] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:20.236 [2024-07-26 05:12:39.303000] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:20.236 [2024-07-26 05:12:39.303091] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:20.236 pt1 00:15:20.236 05:12:39 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:15:20.236 05:12:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:20.236 05:12:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:20.236 05:12:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:20.236 05:12:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:20.236 05:12:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:20.236 05:12:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:20.236 05:12:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:20.236 05:12:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:20.236 05:12:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:20.236 05:12:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.236 05:12:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.495 05:12:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:20.495 "name": "raid_bdev1", 00:15:20.495 "uuid": "2f6b2814-97d4-431b-86a5-97df49d95113", 00:15:20.495 "strip_size_kb": 64, 00:15:20.495 "state": "configuring", 00:15:20.495 "raid_level": "raid0", 00:15:20.495 "superblock": true, 00:15:20.495 "num_base_bdevs": 2, 00:15:20.495 "num_base_bdevs_discovered": 1, 00:15:20.495 "num_base_bdevs_operational": 2, 00:15:20.495 "base_bdevs_list": [ 00:15:20.495 { 00:15:20.495 "name": "pt1", 00:15:20.495 "uuid": "d7d8cced-16a7-5a61-af6f-054e97d7541a", 00:15:20.495 "is_configured": true, 00:15:20.495 "data_offset": 2048, 00:15:20.495 "data_size": 63488 00:15:20.495 }, 00:15:20.495 { 00:15:20.495 "name": null, 00:15:20.495 "uuid": 
"f45a2c58-69e7-5f17-b9d5-3841fa5e8b0a", 00:15:20.495 "is_configured": false, 00:15:20.495 "data_offset": 2048, 00:15:20.495 "data_size": 63488 00:15:20.495 } 00:15:20.495 ] 00:15:20.495 }' 00:15:20.495 05:12:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:20.495 05:12:39 -- common/autotest_common.sh@10 -- # set +x 00:15:20.752 05:12:39 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:15:20.752 05:12:39 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:20.752 05:12:39 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:20.752 05:12:39 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:21.010 [2024-07-26 05:12:40.028221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:21.010 [2024-07-26 05:12:40.028307] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.010 [2024-07-26 05:12:40.028351] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:15:21.010 [2024-07-26 05:12:40.028366] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.010 [2024-07-26 05:12:40.028829] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.010 [2024-07-26 05:12:40.028853] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:21.010 [2024-07-26 05:12:40.028948] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:21.010 [2024-07-26 05:12:40.028974] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:21.010 [2024-07-26 05:12:40.029175] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:15:21.010 [2024-07-26 05:12:40.029191] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:21.010 [2024-07-26 05:12:40.029345] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:15:21.010 [2024-07-26 05:12:40.029717] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:15:21.010 [2024-07-26 05:12:40.029743] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:15:21.010 [2024-07-26 05:12:40.029935] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.010 pt2 00:15:21.010 05:12:40 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:21.010 05:12:40 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:21.010 05:12:40 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:21.010 05:12:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:21.010 05:12:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:21.010 05:12:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:21.010 05:12:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:21.010 05:12:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:21.010 05:12:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:21.010 05:12:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:21.010 05:12:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:21.010 05:12:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:21.010 05:12:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.010 05:12:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.269 05:12:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:21.269 "name": "raid_bdev1", 00:15:21.269 "uuid": "2f6b2814-97d4-431b-86a5-97df49d95113", 00:15:21.269 "strip_size_kb": 64, 00:15:21.269 "state": "online", 00:15:21.269 "raid_level": "raid0", 00:15:21.269 "superblock": true, 00:15:21.269 "num_base_bdevs": 2, 00:15:21.269 "num_base_bdevs_discovered": 2, 00:15:21.269 "num_base_bdevs_operational": 2, 00:15:21.269 "base_bdevs_list": [ 00:15:21.269 { 00:15:21.269 "name": "pt1", 00:15:21.269 "uuid": "d7d8cced-16a7-5a61-af6f-054e97d7541a", 00:15:21.269 "is_configured": true, 00:15:21.269 "data_offset": 2048, 00:15:21.269 "data_size": 63488 00:15:21.269 }, 00:15:21.269 { 00:15:21.269 "name": "pt2", 00:15:21.269 "uuid": "f45a2c58-69e7-5f17-b9d5-3841fa5e8b0a", 00:15:21.269 "is_configured": true, 00:15:21.269 "data_offset": 2048, 00:15:21.269 "data_size": 63488 00:15:21.269 } 00:15:21.269 ] 00:15:21.269 }' 00:15:21.269 05:12:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:21.269 05:12:40 -- common/autotest_common.sh@10 -- # set +x 00:15:21.528 05:12:40 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:21.528 05:12:40 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:21.786 [2024-07-26 05:12:40.824720] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:21.786 05:12:40 -- bdev/bdev_raid.sh@430 -- # '[' 2f6b2814-97d4-431b-86a5-97df49d95113 '!=' 2f6b2814-97d4-431b-86a5-97df49d95113 ']' 00:15:21.786 05:12:40 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:15:21.786 05:12:40 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:21.786 05:12:40 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:21.786 05:12:40 -- bdev/bdev_raid.sh@511 -- # killprocess 69261 00:15:21.786 05:12:40 -- common/autotest_common.sh@926 -- # '[' -z 69261 ']' 00:15:21.786 05:12:40 -- common/autotest_common.sh@930 -- # kill -0 69261 00:15:21.786 05:12:40 -- common/autotest_common.sh@931 -- # uname 00:15:21.786 05:12:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:21.786 05:12:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69261 00:15:21.786 killing process with pid 69261 00:15:21.786 05:12:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:21.786 05:12:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:21.786 05:12:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69261' 00:15:21.786 05:12:40 -- common/autotest_common.sh@945 -- # kill 69261 00:15:21.786 [2024-07-26 05:12:40.882414] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:21.786 05:12:40 -- common/autotest_common.sh@950 -- # wait 69261 00:15:21.786 [2024-07-26 05:12:40.882520] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:21.786 [2024-07-26 05:12:40.882574] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:21.786 [2024-07-26 05:12:40.882596] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:15:22.044 [2024-07-26 05:12:41.038236] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:23.420 ************************************ 00:15:23.420 END TEST raid_superblock_test 00:15:23.420 
************************************ 00:15:23.420 05:12:42 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:23.420 00:15:23.420 real 0m7.506s 00:15:23.420 user 0m11.995s 00:15:23.420 sys 0m1.070s 00:15:23.420 05:12:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:23.420 05:12:42 -- common/autotest_common.sh@10 -- # set +x 00:15:23.420 05:12:42 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:23.420 05:12:42 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:15:23.420 05:12:42 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:23.420 05:12:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:23.420 05:12:42 -- common/autotest_common.sh@10 -- # set +x 00:15:23.420 ************************************ 00:15:23.420 START TEST raid_state_function_test 00:15:23.420 ************************************ 00:15:23.420 05:12:42 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 false 00:15:23.420 05:12:42 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:15:23.420 05:12:42 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:23.420 05:12:42 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:23.420 05:12:42 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:23.420 05:12:42 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:23.420 05:12:42 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:23.420 05:12:42 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:15:23.420 05:12:42 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:23.420 05:12:42 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:23.420 05:12:42 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:15:23.420 05:12:42 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:23.420 05:12:42 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:23.420 05:12:42 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:23.420 05:12:42 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:23.420 05:12:42 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:23.420 05:12:42 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:23.420 05:12:42 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:23.420 05:12:42 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:23.420 05:12:42 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:15:23.420 05:12:42 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:23.420 05:12:42 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:23.420 05:12:42 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:23.420 05:12:42 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:23.420 05:12:42 -- bdev/bdev_raid.sh@226 -- # raid_pid=69484 00:15:23.420 Process raid pid: 69484 00:15:23.420 05:12:42 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 69484' 00:15:23.420 05:12:42 -- bdev/bdev_raid.sh@228 -- # waitforlisten 69484 /var/tmp/spdk-raid.sock 00:15:23.420 05:12:42 -- common/autotest_common.sh@819 -- # '[' -z 69484 ']' 00:15:23.420 05:12:42 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:23.420 05:12:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:23.420 05:12:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:23.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
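The raid_superblock_test pass that ends above boils down to the RPC sequence sketched here. It is reconstructed from the trace rather than copied from bdev_raid.sh (the $rpc shorthand is introduced only for readability; socket path, bdev names, sizes and flags are the ones visible in the log), and it assumes the bdev_svc app from the trace is still listening on /var/tmp/spdk-raid.sock:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Two 32 MiB / 512 B-block malloc bdevs, each wrapped in a passthru bdev with a fixed UUID.
    $rpc bdev_malloc_create 32 512 -b malloc1
    $rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $rpc bdev_malloc_create 32 512 -b malloc2
    $rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

    # Assemble a raid0 bdev with a 64 KiB strip and an on-disk superblock (-s).
    $rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s

    # The test expects state "online" with 2 of 2 base bdevs discovered.
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

    # Tear down the array and the passthru layer. The superblock written by -s stays on
    # malloc1/malloc2, so creating a raid directly on them fails with -17 "File exists",
    # which is the JSON-RPC error captured in the trace above.
    $rpc bdev_raid_delete raid_bdev1
    $rpc bdev_passthru_delete pt1
    $rpc bdev_passthru_delete pt2
    $rpc bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1   # expected to fail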
00:15:23.420 05:12:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:23.420 05:12:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:23.420 05:12:42 -- common/autotest_common.sh@10 -- # set +x 00:15:23.420 [2024-07-26 05:12:42.242228] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:23.420 [2024-07-26 05:12:42.242402] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.420 [2024-07-26 05:12:42.411904] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.679 [2024-07-26 05:12:42.576071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.679 [2024-07-26 05:12:42.741843] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:24.246 05:12:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:24.246 05:12:43 -- common/autotest_common.sh@852 -- # return 0 00:15:24.246 05:12:43 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:24.507 [2024-07-26 05:12:43.407712] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:24.507 [2024-07-26 05:12:43.407808] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:24.507 [2024-07-26 05:12:43.407823] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:24.507 [2024-07-26 05:12:43.407837] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:24.507 05:12:43 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:24.507 05:12:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:24.507 05:12:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:24.507 05:12:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:24.507 05:12:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:24.507 05:12:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:24.507 05:12:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:24.507 05:12:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:24.507 05:12:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:24.507 05:12:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:24.507 05:12:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.507 05:12:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.768 05:12:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:24.768 "name": "Existed_Raid", 00:15:24.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.768 "strip_size_kb": 64, 00:15:24.768 "state": "configuring", 00:15:24.768 "raid_level": "concat", 00:15:24.768 "superblock": false, 00:15:24.768 "num_base_bdevs": 2, 00:15:24.768 "num_base_bdevs_discovered": 0, 00:15:24.768 "num_base_bdevs_operational": 2, 00:15:24.768 "base_bdevs_list": [ 00:15:24.768 { 00:15:24.768 "name": "BaseBdev1", 00:15:24.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.768 "is_configured": false, 
00:15:24.768 "data_offset": 0, 00:15:24.768 "data_size": 0 00:15:24.768 }, 00:15:24.768 { 00:15:24.768 "name": "BaseBdev2", 00:15:24.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.768 "is_configured": false, 00:15:24.768 "data_offset": 0, 00:15:24.768 "data_size": 0 00:15:24.768 } 00:15:24.768 ] 00:15:24.768 }' 00:15:24.768 05:12:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:24.768 05:12:43 -- common/autotest_common.sh@10 -- # set +x 00:15:25.026 05:12:44 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:25.285 [2024-07-26 05:12:44.239854] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:25.285 [2024-07-26 05:12:44.239968] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:15:25.285 05:12:44 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:25.543 [2024-07-26 05:12:44.443915] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:25.543 [2024-07-26 05:12:44.444009] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:25.543 [2024-07-26 05:12:44.444044] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:25.543 [2024-07-26 05:12:44.444060] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:25.543 05:12:44 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:25.802 [2024-07-26 05:12:44.664558] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:25.802 BaseBdev1 00:15:25.802 05:12:44 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:25.802 05:12:44 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:25.802 05:12:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:25.802 05:12:44 -- common/autotest_common.sh@889 -- # local i 00:15:25.802 05:12:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:25.802 05:12:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:25.802 05:12:44 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:26.061 05:12:44 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:26.061 [ 00:15:26.061 { 00:15:26.061 "name": "BaseBdev1", 00:15:26.061 "aliases": [ 00:15:26.061 "80e30598-3514-490f-9662-2dae66c0aeff" 00:15:26.061 ], 00:15:26.061 "product_name": "Malloc disk", 00:15:26.061 "block_size": 512, 00:15:26.061 "num_blocks": 65536, 00:15:26.061 "uuid": "80e30598-3514-490f-9662-2dae66c0aeff", 00:15:26.061 "assigned_rate_limits": { 00:15:26.061 "rw_ios_per_sec": 0, 00:15:26.061 "rw_mbytes_per_sec": 0, 00:15:26.061 "r_mbytes_per_sec": 0, 00:15:26.061 "w_mbytes_per_sec": 0 00:15:26.061 }, 00:15:26.061 "claimed": true, 00:15:26.061 "claim_type": "exclusive_write", 00:15:26.061 "zoned": false, 00:15:26.061 "supported_io_types": { 00:15:26.061 "read": true, 00:15:26.061 "write": true, 00:15:26.061 "unmap": true, 00:15:26.061 "write_zeroes": true, 00:15:26.061 "flush": true, 00:15:26.061 "reset": true, 00:15:26.061 
"compare": false, 00:15:26.061 "compare_and_write": false, 00:15:26.061 "abort": true, 00:15:26.061 "nvme_admin": false, 00:15:26.061 "nvme_io": false 00:15:26.061 }, 00:15:26.061 "memory_domains": [ 00:15:26.061 { 00:15:26.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.061 "dma_device_type": 2 00:15:26.061 } 00:15:26.061 ], 00:15:26.061 "driver_specific": {} 00:15:26.061 } 00:15:26.061 ] 00:15:26.061 05:12:45 -- common/autotest_common.sh@895 -- # return 0 00:15:26.061 05:12:45 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:26.061 05:12:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:26.061 05:12:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:26.061 05:12:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:26.061 05:12:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:26.061 05:12:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:26.061 05:12:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:26.061 05:12:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:26.061 05:12:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:26.061 05:12:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:26.061 05:12:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.061 05:12:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.319 05:12:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:26.319 "name": "Existed_Raid", 00:15:26.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.319 "strip_size_kb": 64, 00:15:26.319 "state": "configuring", 00:15:26.319 "raid_level": "concat", 00:15:26.319 "superblock": false, 00:15:26.319 "num_base_bdevs": 2, 00:15:26.319 "num_base_bdevs_discovered": 1, 00:15:26.319 "num_base_bdevs_operational": 2, 00:15:26.319 "base_bdevs_list": [ 00:15:26.319 { 00:15:26.319 "name": "BaseBdev1", 00:15:26.319 "uuid": "80e30598-3514-490f-9662-2dae66c0aeff", 00:15:26.319 "is_configured": true, 00:15:26.319 "data_offset": 0, 00:15:26.319 "data_size": 65536 00:15:26.319 }, 00:15:26.319 { 00:15:26.319 "name": "BaseBdev2", 00:15:26.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.319 "is_configured": false, 00:15:26.319 "data_offset": 0, 00:15:26.319 "data_size": 0 00:15:26.319 } 00:15:26.319 ] 00:15:26.319 }' 00:15:26.319 05:12:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:26.319 05:12:45 -- common/autotest_common.sh@10 -- # set +x 00:15:26.577 05:12:45 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:26.836 [2024-07-26 05:12:45.888907] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:26.836 [2024-07-26 05:12:45.888978] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:15:26.836 05:12:45 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:26.836 05:12:45 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:27.095 [2024-07-26 05:12:46.089090] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:27.095 [2024-07-26 05:12:46.091392] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:15:27.095 [2024-07-26 05:12:46.091473] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:27.095 05:12:46 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:27.095 05:12:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:27.095 05:12:46 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:27.095 05:12:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:27.095 05:12:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:27.095 05:12:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:27.095 05:12:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:27.095 05:12:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:27.095 05:12:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:27.095 05:12:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:27.095 05:12:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:27.095 05:12:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:27.095 05:12:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.095 05:12:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.354 05:12:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:27.354 "name": "Existed_Raid", 00:15:27.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.354 "strip_size_kb": 64, 00:15:27.354 "state": "configuring", 00:15:27.354 "raid_level": "concat", 00:15:27.354 "superblock": false, 00:15:27.354 "num_base_bdevs": 2, 00:15:27.354 "num_base_bdevs_discovered": 1, 00:15:27.354 "num_base_bdevs_operational": 2, 00:15:27.354 "base_bdevs_list": [ 00:15:27.354 { 00:15:27.354 "name": "BaseBdev1", 00:15:27.354 "uuid": "80e30598-3514-490f-9662-2dae66c0aeff", 00:15:27.354 "is_configured": true, 00:15:27.354 "data_offset": 0, 00:15:27.354 "data_size": 65536 00:15:27.354 }, 00:15:27.354 { 00:15:27.354 "name": "BaseBdev2", 00:15:27.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.354 "is_configured": false, 00:15:27.354 "data_offset": 0, 00:15:27.354 "data_size": 0 00:15:27.354 } 00:15:27.354 ] 00:15:27.354 }' 00:15:27.354 05:12:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:27.354 05:12:46 -- common/autotest_common.sh@10 -- # set +x 00:15:27.612 05:12:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:27.871 [2024-07-26 05:12:46.918102] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:27.871 [2024-07-26 05:12:46.918176] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:15:27.871 [2024-07-26 05:12:46.918188] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:27.871 [2024-07-26 05:12:46.918318] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:15:27.871 [2024-07-26 05:12:46.918700] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:15:27.871 [2024-07-26 05:12:46.918734] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:15:27.871 [2024-07-26 05:12:46.918983] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.871 BaseBdev2 00:15:27.871 05:12:46 -- bdev/bdev_raid.sh@257 
-- # waitforbdev BaseBdev2 00:15:27.871 05:12:46 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:27.871 05:12:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:27.871 05:12:46 -- common/autotest_common.sh@889 -- # local i 00:15:27.871 05:12:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:27.871 05:12:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:27.871 05:12:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:28.130 05:12:47 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:28.389 [ 00:15:28.389 { 00:15:28.389 "name": "BaseBdev2", 00:15:28.389 "aliases": [ 00:15:28.389 "59e07e21-522c-44c9-af9c-fac09ee2c661" 00:15:28.389 ], 00:15:28.389 "product_name": "Malloc disk", 00:15:28.389 "block_size": 512, 00:15:28.389 "num_blocks": 65536, 00:15:28.389 "uuid": "59e07e21-522c-44c9-af9c-fac09ee2c661", 00:15:28.389 "assigned_rate_limits": { 00:15:28.389 "rw_ios_per_sec": 0, 00:15:28.389 "rw_mbytes_per_sec": 0, 00:15:28.389 "r_mbytes_per_sec": 0, 00:15:28.389 "w_mbytes_per_sec": 0 00:15:28.389 }, 00:15:28.389 "claimed": true, 00:15:28.389 "claim_type": "exclusive_write", 00:15:28.389 "zoned": false, 00:15:28.389 "supported_io_types": { 00:15:28.389 "read": true, 00:15:28.389 "write": true, 00:15:28.389 "unmap": true, 00:15:28.389 "write_zeroes": true, 00:15:28.389 "flush": true, 00:15:28.389 "reset": true, 00:15:28.389 "compare": false, 00:15:28.389 "compare_and_write": false, 00:15:28.389 "abort": true, 00:15:28.389 "nvme_admin": false, 00:15:28.389 "nvme_io": false 00:15:28.389 }, 00:15:28.389 "memory_domains": [ 00:15:28.389 { 00:15:28.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.389 "dma_device_type": 2 00:15:28.389 } 00:15:28.389 ], 00:15:28.389 "driver_specific": {} 00:15:28.389 } 00:15:28.389 ] 00:15:28.389 05:12:47 -- common/autotest_common.sh@895 -- # return 0 00:15:28.389 05:12:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:28.389 05:12:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:28.389 05:12:47 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:28.389 05:12:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:28.389 05:12:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:28.389 05:12:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:28.389 05:12:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:28.389 05:12:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:28.389 05:12:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:28.389 05:12:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:28.389 05:12:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:28.389 05:12:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:28.389 05:12:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.389 05:12:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.647 05:12:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:28.648 "name": "Existed_Raid", 00:15:28.648 "uuid": "b0fff747-be88-4275-a0c7-faa57ea1007e", 00:15:28.648 "strip_size_kb": 64, 00:15:28.648 "state": "online", 00:15:28.648 "raid_level": "concat", 00:15:28.648 "superblock": false, 
00:15:28.648 "num_base_bdevs": 2, 00:15:28.648 "num_base_bdevs_discovered": 2, 00:15:28.648 "num_base_bdevs_operational": 2, 00:15:28.648 "base_bdevs_list": [ 00:15:28.648 { 00:15:28.648 "name": "BaseBdev1", 00:15:28.648 "uuid": "80e30598-3514-490f-9662-2dae66c0aeff", 00:15:28.648 "is_configured": true, 00:15:28.648 "data_offset": 0, 00:15:28.648 "data_size": 65536 00:15:28.648 }, 00:15:28.648 { 00:15:28.648 "name": "BaseBdev2", 00:15:28.648 "uuid": "59e07e21-522c-44c9-af9c-fac09ee2c661", 00:15:28.648 "is_configured": true, 00:15:28.648 "data_offset": 0, 00:15:28.648 "data_size": 65536 00:15:28.648 } 00:15:28.648 ] 00:15:28.648 }' 00:15:28.648 05:12:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:28.648 05:12:47 -- common/autotest_common.sh@10 -- # set +x 00:15:28.906 05:12:47 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:29.164 [2024-07-26 05:12:48.214665] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:29.164 [2024-07-26 05:12:48.214705] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:29.164 [2024-07-26 05:12:48.214784] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:29.422 05:12:48 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:29.422 05:12:48 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:15:29.422 05:12:48 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:29.422 05:12:48 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:29.422 05:12:48 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:29.422 05:12:48 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:29.422 05:12:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:29.422 05:12:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:29.422 05:12:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:29.422 05:12:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:29.422 05:12:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:29.422 05:12:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:29.422 05:12:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:29.422 05:12:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:29.422 05:12:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:29.422 05:12:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.422 05:12:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.422 05:12:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:29.422 "name": "Existed_Raid", 00:15:29.422 "uuid": "b0fff747-be88-4275-a0c7-faa57ea1007e", 00:15:29.422 "strip_size_kb": 64, 00:15:29.422 "state": "offline", 00:15:29.422 "raid_level": "concat", 00:15:29.422 "superblock": false, 00:15:29.422 "num_base_bdevs": 2, 00:15:29.422 "num_base_bdevs_discovered": 1, 00:15:29.422 "num_base_bdevs_operational": 1, 00:15:29.422 "base_bdevs_list": [ 00:15:29.422 { 00:15:29.422 "name": null, 00:15:29.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.422 "is_configured": false, 00:15:29.422 "data_offset": 0, 00:15:29.422 "data_size": 65536 00:15:29.422 }, 00:15:29.422 { 00:15:29.422 "name": "BaseBdev2", 00:15:29.422 "uuid": "59e07e21-522c-44c9-af9c-fac09ee2c661", 00:15:29.422 "is_configured": true, 00:15:29.422 "data_offset": 0, 00:15:29.422 
"data_size": 65536 00:15:29.422 } 00:15:29.422 ] 00:15:29.422 }' 00:15:29.422 05:12:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:29.422 05:12:48 -- common/autotest_common.sh@10 -- # set +x 00:15:29.989 05:12:48 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:29.989 05:12:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:29.989 05:12:48 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.989 05:12:48 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:29.989 05:12:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:29.989 05:12:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:29.989 05:12:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:30.247 [2024-07-26 05:12:49.267732] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:30.247 [2024-07-26 05:12:49.267818] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:15:30.247 05:12:49 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:30.247 05:12:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:30.505 05:12:49 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.505 05:12:49 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:30.505 05:12:49 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:30.505 05:12:49 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:30.505 05:12:49 -- bdev/bdev_raid.sh@287 -- # killprocess 69484 00:15:30.505 05:12:49 -- common/autotest_common.sh@926 -- # '[' -z 69484 ']' 00:15:30.505 05:12:49 -- common/autotest_common.sh@930 -- # kill -0 69484 00:15:30.505 05:12:49 -- common/autotest_common.sh@931 -- # uname 00:15:30.505 05:12:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:30.505 05:12:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69484 00:15:30.505 05:12:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:30.505 05:12:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:30.505 05:12:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69484' 00:15:30.505 killing process with pid 69484 00:15:30.505 05:12:49 -- common/autotest_common.sh@945 -- # kill 69484 00:15:30.505 [2024-07-26 05:12:49.585614] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:30.505 05:12:49 -- common/autotest_common.sh@950 -- # wait 69484 00:15:30.505 [2024-07-26 05:12:49.585734] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:31.882 05:12:50 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:31.882 00:15:31.882 real 0m8.429s 00:15:31.882 user 0m13.772s 00:15:31.882 sys 0m1.251s 00:15:31.882 05:12:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:31.882 ************************************ 00:15:31.882 END TEST raid_state_function_test 00:15:31.882 ************************************ 00:15:31.882 05:12:50 -- common/autotest_common.sh@10 -- # set +x 00:15:31.882 05:12:50 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:15:31.882 05:12:50 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:31.882 05:12:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:31.882 05:12:50 -- common/autotest_common.sh@10 -- # set +x 
00:15:31.882 ************************************ 00:15:31.882 START TEST raid_state_function_test_sb 00:15:31.882 ************************************ 00:15:31.882 05:12:50 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 true 00:15:31.882 05:12:50 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:15:31.882 05:12:50 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:31.882 05:12:50 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:31.882 05:12:50 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:31.882 05:12:50 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:31.882 05:12:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:31.882 05:12:50 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:15:31.882 05:12:50 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:31.882 05:12:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:31.882 05:12:50 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:15:31.882 05:12:50 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:31.882 05:12:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:31.882 05:12:50 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:31.882 05:12:50 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:31.882 05:12:50 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:31.882 05:12:50 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:31.882 05:12:50 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:31.882 05:12:50 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:31.882 05:12:50 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:15:31.882 05:12:50 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:31.882 05:12:50 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:31.882 05:12:50 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:31.882 05:12:50 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:31.882 Process raid pid: 69772 00:15:31.882 05:12:50 -- bdev/bdev_raid.sh@226 -- # raid_pid=69772 00:15:31.882 05:12:50 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 69772' 00:15:31.882 05:12:50 -- bdev/bdev_raid.sh@228 -- # waitforlisten 69772 /var/tmp/spdk-raid.sock 00:15:31.882 05:12:50 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:31.882 05:12:50 -- common/autotest_common.sh@819 -- # '[' -z 69772 ']' 00:15:31.882 05:12:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:31.882 05:12:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:31.882 05:12:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:31.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:31.882 05:12:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:31.882 05:12:50 -- common/autotest_common.sh@10 -- # set +x 00:15:31.882 [2024-07-26 05:12:50.730093] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:15:31.882 [2024-07-26 05:12:50.730248] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.882 [2024-07-26 05:12:50.900405] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.142 [2024-07-26 05:12:51.072539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.142 [2024-07-26 05:12:51.226468] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:32.710 05:12:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:32.710 05:12:51 -- common/autotest_common.sh@852 -- # return 0 00:15:32.710 05:12:51 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:32.969 [2024-07-26 05:12:51.848906] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:32.969 [2024-07-26 05:12:51.848983] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:32.969 [2024-07-26 05:12:51.849031] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:32.969 [2024-07-26 05:12:51.849059] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:32.969 05:12:51 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:32.969 05:12:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:32.969 05:12:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:32.969 05:12:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:32.969 05:12:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:32.969 05:12:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:32.969 05:12:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:32.969 05:12:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:32.969 05:12:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:32.969 05:12:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:32.969 05:12:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.969 05:12:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.227 05:12:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:33.227 "name": "Existed_Raid", 00:15:33.227 "uuid": "ddc4da4f-cbe8-4d2c-a0c1-5dfce7a475e5", 00:15:33.227 "strip_size_kb": 64, 00:15:33.227 "state": "configuring", 00:15:33.227 "raid_level": "concat", 00:15:33.227 "superblock": true, 00:15:33.227 "num_base_bdevs": 2, 00:15:33.227 "num_base_bdevs_discovered": 0, 00:15:33.227 "num_base_bdevs_operational": 2, 00:15:33.227 "base_bdevs_list": [ 00:15:33.227 { 00:15:33.227 "name": "BaseBdev1", 00:15:33.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.227 "is_configured": false, 00:15:33.227 "data_offset": 0, 00:15:33.227 "data_size": 0 00:15:33.227 }, 00:15:33.227 { 00:15:33.227 "name": "BaseBdev2", 00:15:33.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.227 "is_configured": false, 00:15:33.227 "data_offset": 0, 00:15:33.227 "data_size": 0 00:15:33.227 } 00:15:33.227 ] 00:15:33.227 }' 00:15:33.227 05:12:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:33.227 05:12:52 -- 
common/autotest_common.sh@10 -- # set +x 00:15:33.485 05:12:52 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:33.743 [2024-07-26 05:12:52.680928] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:33.743 [2024-07-26 05:12:52.681180] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:15:33.743 05:12:52 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:34.001 [2024-07-26 05:12:52.929092] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:34.001 [2024-07-26 05:12:52.929158] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:34.001 [2024-07-26 05:12:52.929180] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:34.001 [2024-07-26 05:12:52.929195] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:34.001 05:12:52 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:34.260 [2024-07-26 05:12:53.167540] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:34.260 BaseBdev1 00:15:34.260 05:12:53 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:34.260 05:12:53 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:34.260 05:12:53 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:34.260 05:12:53 -- common/autotest_common.sh@889 -- # local i 00:15:34.260 05:12:53 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:34.260 05:12:53 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:34.260 05:12:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:34.518 05:12:53 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:34.518 [ 00:15:34.518 { 00:15:34.518 "name": "BaseBdev1", 00:15:34.518 "aliases": [ 00:15:34.518 "149a0bf6-185b-428a-8335-bb13624d6d0f" 00:15:34.518 ], 00:15:34.518 "product_name": "Malloc disk", 00:15:34.518 "block_size": 512, 00:15:34.518 "num_blocks": 65536, 00:15:34.518 "uuid": "149a0bf6-185b-428a-8335-bb13624d6d0f", 00:15:34.518 "assigned_rate_limits": { 00:15:34.518 "rw_ios_per_sec": 0, 00:15:34.518 "rw_mbytes_per_sec": 0, 00:15:34.518 "r_mbytes_per_sec": 0, 00:15:34.518 "w_mbytes_per_sec": 0 00:15:34.518 }, 00:15:34.518 "claimed": true, 00:15:34.518 "claim_type": "exclusive_write", 00:15:34.518 "zoned": false, 00:15:34.518 "supported_io_types": { 00:15:34.518 "read": true, 00:15:34.518 "write": true, 00:15:34.518 "unmap": true, 00:15:34.518 "write_zeroes": true, 00:15:34.518 "flush": true, 00:15:34.518 "reset": true, 00:15:34.518 "compare": false, 00:15:34.518 "compare_and_write": false, 00:15:34.518 "abort": true, 00:15:34.518 "nvme_admin": false, 00:15:34.518 "nvme_io": false 00:15:34.518 }, 00:15:34.518 "memory_domains": [ 00:15:34.518 { 00:15:34.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.518 "dma_device_type": 2 00:15:34.518 } 00:15:34.518 ], 00:15:34.518 "driver_specific": {} 00:15:34.518 } 00:15:34.518 ] 00:15:34.518 
05:12:53 -- common/autotest_common.sh@895 -- # return 0 00:15:34.518 05:12:53 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:34.518 05:12:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:34.518 05:12:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:34.518 05:12:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:34.518 05:12:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:34.518 05:12:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:34.518 05:12:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:34.518 05:12:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:34.518 05:12:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:34.518 05:12:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:34.518 05:12:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.518 05:12:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.776 05:12:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:34.776 "name": "Existed_Raid", 00:15:34.776 "uuid": "dcb24a6e-9688-4d92-8169-7a96bf4c8f96", 00:15:34.776 "strip_size_kb": 64, 00:15:34.776 "state": "configuring", 00:15:34.776 "raid_level": "concat", 00:15:34.776 "superblock": true, 00:15:34.776 "num_base_bdevs": 2, 00:15:34.776 "num_base_bdevs_discovered": 1, 00:15:34.776 "num_base_bdevs_operational": 2, 00:15:34.776 "base_bdevs_list": [ 00:15:34.776 { 00:15:34.776 "name": "BaseBdev1", 00:15:34.776 "uuid": "149a0bf6-185b-428a-8335-bb13624d6d0f", 00:15:34.776 "is_configured": true, 00:15:34.776 "data_offset": 2048, 00:15:34.776 "data_size": 63488 00:15:34.776 }, 00:15:34.776 { 00:15:34.776 "name": "BaseBdev2", 00:15:34.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.776 "is_configured": false, 00:15:34.776 "data_offset": 0, 00:15:34.776 "data_size": 0 00:15:34.776 } 00:15:34.776 ] 00:15:34.776 }' 00:15:34.776 05:12:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:34.776 05:12:53 -- common/autotest_common.sh@10 -- # set +x 00:15:35.034 05:12:54 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:35.292 [2024-07-26 05:12:54.263837] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:35.292 [2024-07-26 05:12:54.263892] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:15:35.292 05:12:54 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:35.292 05:12:54 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:35.550 05:12:54 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:35.808 BaseBdev1 00:15:35.808 05:12:54 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:35.808 05:12:54 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:35.808 05:12:54 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:35.808 05:12:54 -- common/autotest_common.sh@889 -- # local i 00:15:35.808 05:12:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:35.808 05:12:54 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:35.808 05:12:54 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:36.066 05:12:55 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:36.325 [ 00:15:36.325 { 00:15:36.325 "name": "BaseBdev1", 00:15:36.325 "aliases": [ 00:15:36.325 "68218aea-beb3-4328-8bf6-d380feaddc94" 00:15:36.325 ], 00:15:36.325 "product_name": "Malloc disk", 00:15:36.325 "block_size": 512, 00:15:36.325 "num_blocks": 65536, 00:15:36.325 "uuid": "68218aea-beb3-4328-8bf6-d380feaddc94", 00:15:36.325 "assigned_rate_limits": { 00:15:36.325 "rw_ios_per_sec": 0, 00:15:36.325 "rw_mbytes_per_sec": 0, 00:15:36.325 "r_mbytes_per_sec": 0, 00:15:36.325 "w_mbytes_per_sec": 0 00:15:36.325 }, 00:15:36.325 "claimed": false, 00:15:36.325 "zoned": false, 00:15:36.325 "supported_io_types": { 00:15:36.325 "read": true, 00:15:36.325 "write": true, 00:15:36.325 "unmap": true, 00:15:36.325 "write_zeroes": true, 00:15:36.325 "flush": true, 00:15:36.325 "reset": true, 00:15:36.325 "compare": false, 00:15:36.325 "compare_and_write": false, 00:15:36.325 "abort": true, 00:15:36.325 "nvme_admin": false, 00:15:36.325 "nvme_io": false 00:15:36.325 }, 00:15:36.325 "memory_domains": [ 00:15:36.325 { 00:15:36.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.325 "dma_device_type": 2 00:15:36.325 } 00:15:36.325 ], 00:15:36.325 "driver_specific": {} 00:15:36.325 } 00:15:36.325 ] 00:15:36.325 05:12:55 -- common/autotest_common.sh@895 -- # return 0 00:15:36.325 05:12:55 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:36.584 [2024-07-26 05:12:55.533196] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:36.584 [2024-07-26 05:12:55.535160] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:36.584 [2024-07-26 05:12:55.535210] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:36.584 05:12:55 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:36.584 05:12:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:36.584 05:12:55 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:36.584 05:12:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:36.584 05:12:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:36.584 05:12:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:36.584 05:12:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:36.584 05:12:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:36.584 05:12:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:36.584 05:12:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:36.584 05:12:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:36.584 05:12:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:36.584 05:12:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.584 05:12:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.843 05:12:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:36.843 "name": "Existed_Raid", 00:15:36.843 "uuid": "9fd8634f-e0ba-4d56-b847-9a0e725d5475", 00:15:36.843 "strip_size_kb": 64, 00:15:36.843 "state": 
"configuring", 00:15:36.843 "raid_level": "concat", 00:15:36.843 "superblock": true, 00:15:36.843 "num_base_bdevs": 2, 00:15:36.843 "num_base_bdevs_discovered": 1, 00:15:36.843 "num_base_bdevs_operational": 2, 00:15:36.843 "base_bdevs_list": [ 00:15:36.843 { 00:15:36.843 "name": "BaseBdev1", 00:15:36.843 "uuid": "68218aea-beb3-4328-8bf6-d380feaddc94", 00:15:36.843 "is_configured": true, 00:15:36.843 "data_offset": 2048, 00:15:36.843 "data_size": 63488 00:15:36.843 }, 00:15:36.843 { 00:15:36.843 "name": "BaseBdev2", 00:15:36.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.843 "is_configured": false, 00:15:36.843 "data_offset": 0, 00:15:36.843 "data_size": 0 00:15:36.843 } 00:15:36.843 ] 00:15:36.843 }' 00:15:36.843 05:12:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:36.843 05:12:55 -- common/autotest_common.sh@10 -- # set +x 00:15:37.101 05:12:56 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:37.359 [2024-07-26 05:12:56.314698] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:37.359 [2024-07-26 05:12:56.314981] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:15:37.359 [2024-07-26 05:12:56.315023] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:37.359 [2024-07-26 05:12:56.315157] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:15:37.359 BaseBdev2 00:15:37.359 [2024-07-26 05:12:56.315532] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:15:37.359 [2024-07-26 05:12:56.315562] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:15:37.359 [2024-07-26 05:12:56.315716] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.359 05:12:56 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:37.359 05:12:56 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:37.359 05:12:56 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:37.359 05:12:56 -- common/autotest_common.sh@889 -- # local i 00:15:37.359 05:12:56 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:37.359 05:12:56 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:37.359 05:12:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:37.618 05:12:56 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:37.876 [ 00:15:37.876 { 00:15:37.876 "name": "BaseBdev2", 00:15:37.876 "aliases": [ 00:15:37.876 "0f50e87b-8173-4fa2-beb2-46c2ec010161" 00:15:37.876 ], 00:15:37.876 "product_name": "Malloc disk", 00:15:37.876 "block_size": 512, 00:15:37.876 "num_blocks": 65536, 00:15:37.876 "uuid": "0f50e87b-8173-4fa2-beb2-46c2ec010161", 00:15:37.876 "assigned_rate_limits": { 00:15:37.876 "rw_ios_per_sec": 0, 00:15:37.876 "rw_mbytes_per_sec": 0, 00:15:37.876 "r_mbytes_per_sec": 0, 00:15:37.876 "w_mbytes_per_sec": 0 00:15:37.876 }, 00:15:37.876 "claimed": true, 00:15:37.876 "claim_type": "exclusive_write", 00:15:37.876 "zoned": false, 00:15:37.876 "supported_io_types": { 00:15:37.876 "read": true, 00:15:37.876 "write": true, 00:15:37.876 "unmap": true, 00:15:37.876 "write_zeroes": true, 00:15:37.876 "flush": true, 00:15:37.876 
"reset": true, 00:15:37.876 "compare": false, 00:15:37.876 "compare_and_write": false, 00:15:37.876 "abort": true, 00:15:37.876 "nvme_admin": false, 00:15:37.876 "nvme_io": false 00:15:37.876 }, 00:15:37.876 "memory_domains": [ 00:15:37.876 { 00:15:37.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.876 "dma_device_type": 2 00:15:37.876 } 00:15:37.876 ], 00:15:37.876 "driver_specific": {} 00:15:37.876 } 00:15:37.876 ] 00:15:37.876 05:12:56 -- common/autotest_common.sh@895 -- # return 0 00:15:37.876 05:12:56 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:37.876 05:12:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:37.877 05:12:56 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:37.877 05:12:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:37.877 05:12:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:37.877 05:12:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:37.877 05:12:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:37.877 05:12:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:37.877 05:12:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:37.877 05:12:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:37.877 05:12:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:37.877 05:12:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:37.877 05:12:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.877 05:12:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.877 05:12:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:37.877 "name": "Existed_Raid", 00:15:37.877 "uuid": "9fd8634f-e0ba-4d56-b847-9a0e725d5475", 00:15:37.877 "strip_size_kb": 64, 00:15:37.877 "state": "online", 00:15:37.877 "raid_level": "concat", 00:15:37.877 "superblock": true, 00:15:37.877 "num_base_bdevs": 2, 00:15:37.877 "num_base_bdevs_discovered": 2, 00:15:37.877 "num_base_bdevs_operational": 2, 00:15:37.877 "base_bdevs_list": [ 00:15:37.877 { 00:15:37.877 "name": "BaseBdev1", 00:15:37.877 "uuid": "68218aea-beb3-4328-8bf6-d380feaddc94", 00:15:37.877 "is_configured": true, 00:15:37.877 "data_offset": 2048, 00:15:37.877 "data_size": 63488 00:15:37.877 }, 00:15:37.877 { 00:15:37.877 "name": "BaseBdev2", 00:15:37.877 "uuid": "0f50e87b-8173-4fa2-beb2-46c2ec010161", 00:15:37.877 "is_configured": true, 00:15:37.877 "data_offset": 2048, 00:15:37.877 "data_size": 63488 00:15:37.877 } 00:15:37.877 ] 00:15:37.877 }' 00:15:37.877 05:12:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:37.877 05:12:56 -- common/autotest_common.sh@10 -- # set +x 00:15:38.135 05:12:57 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:38.394 [2024-07-26 05:12:57.459135] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:38.394 [2024-07-26 05:12:57.459365] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:38.395 [2024-07-26 05:12:57.459565] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:38.653 05:12:57 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:38.653 05:12:57 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:15:38.653 05:12:57 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:38.653 05:12:57 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:38.653 
05:12:57 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:38.653 05:12:57 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:38.653 05:12:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:38.653 05:12:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:38.653 05:12:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:38.653 05:12:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:38.653 05:12:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:38.653 05:12:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:38.653 05:12:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:38.653 05:12:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:38.653 05:12:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:38.653 05:12:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.653 05:12:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.911 05:12:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:38.911 "name": "Existed_Raid", 00:15:38.911 "uuid": "9fd8634f-e0ba-4d56-b847-9a0e725d5475", 00:15:38.911 "strip_size_kb": 64, 00:15:38.911 "state": "offline", 00:15:38.911 "raid_level": "concat", 00:15:38.911 "superblock": true, 00:15:38.911 "num_base_bdevs": 2, 00:15:38.911 "num_base_bdevs_discovered": 1, 00:15:38.911 "num_base_bdevs_operational": 1, 00:15:38.911 "base_bdevs_list": [ 00:15:38.911 { 00:15:38.911 "name": null, 00:15:38.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.911 "is_configured": false, 00:15:38.911 "data_offset": 2048, 00:15:38.911 "data_size": 63488 00:15:38.911 }, 00:15:38.911 { 00:15:38.911 "name": "BaseBdev2", 00:15:38.911 "uuid": "0f50e87b-8173-4fa2-beb2-46c2ec010161", 00:15:38.911 "is_configured": true, 00:15:38.911 "data_offset": 2048, 00:15:38.911 "data_size": 63488 00:15:38.911 } 00:15:38.911 ] 00:15:38.911 }' 00:15:38.911 05:12:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:38.911 05:12:57 -- common/autotest_common.sh@10 -- # set +x 00:15:39.169 05:12:58 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:39.169 05:12:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:39.169 05:12:58 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.169 05:12:58 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:39.428 05:12:58 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:39.428 05:12:58 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:39.428 05:12:58 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:39.728 [2024-07-26 05:12:58.596057] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:39.728 [2024-07-26 05:12:58.596137] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:15:39.728 05:12:58 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:39.728 05:12:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:39.728 05:12:58 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.728 05:12:58 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:39.987 05:12:58 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:15:39.987 05:12:58 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:39.987 05:12:58 -- bdev/bdev_raid.sh@287 -- # killprocess 69772 00:15:39.987 05:12:58 -- common/autotest_common.sh@926 -- # '[' -z 69772 ']' 00:15:39.987 05:12:58 -- common/autotest_common.sh@930 -- # kill -0 69772 00:15:39.987 05:12:58 -- common/autotest_common.sh@931 -- # uname 00:15:39.987 05:12:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:39.987 05:12:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69772 00:15:39.987 killing process with pid 69772 00:15:39.987 05:12:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:39.987 05:12:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:39.987 05:12:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69772' 00:15:39.987 05:12:58 -- common/autotest_common.sh@945 -- # kill 69772 00:15:39.987 [2024-07-26 05:12:58.994872] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:39.987 05:12:58 -- common/autotest_common.sh@950 -- # wait 69772 00:15:39.987 [2024-07-26 05:12:58.994980] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:40.924 ************************************ 00:15:40.924 END TEST raid_state_function_test_sb 00:15:40.924 ************************************ 00:15:40.924 05:12:59 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:40.924 00:15:40.924 real 0m9.331s 00:15:40.924 user 0m15.347s 00:15:40.924 sys 0m1.372s 00:15:40.924 05:12:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:40.924 05:12:59 -- common/autotest_common.sh@10 -- # set +x 00:15:41.184 05:13:00 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:15:41.184 05:13:00 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:15:41.184 05:13:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:41.184 05:13:00 -- common/autotest_common.sh@10 -- # set +x 00:15:41.184 ************************************ 00:15:41.184 START TEST raid_superblock_test 00:15:41.184 ************************************ 00:15:41.184 05:13:00 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 2 00:15:41.184 05:13:00 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:15:41.184 05:13:00 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:15:41.184 05:13:00 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:41.184 05:13:00 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:41.184 05:13:00 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:41.184 05:13:00 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:41.184 05:13:00 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:41.184 05:13:00 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:41.184 05:13:00 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:41.184 05:13:00 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:41.184 05:13:00 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:41.184 05:13:00 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:41.184 05:13:00 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:41.184 05:13:00 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:15:41.184 05:13:00 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:15:41.184 05:13:00 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:15:41.184 05:13:00 -- bdev/bdev_raid.sh@357 -- # raid_pid=70071 00:15:41.184 05:13:00 -- bdev/bdev_raid.sh@358 -- # waitforlisten 70071 
/var/tmp/spdk-raid.sock 00:15:41.184 05:13:00 -- common/autotest_common.sh@819 -- # '[' -z 70071 ']' 00:15:41.184 05:13:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:41.184 05:13:00 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:41.184 05:13:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:41.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:41.184 05:13:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:41.184 05:13:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:41.184 05:13:00 -- common/autotest_common.sh@10 -- # set +x 00:15:41.184 [2024-07-26 05:13:00.107423] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:41.184 [2024-07-26 05:13:00.107590] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70071 ] 00:15:41.184 [2024-07-26 05:13:00.278441] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.444 [2024-07-26 05:13:00.445953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.702 [2024-07-26 05:13:00.609486] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:41.962 05:13:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:41.962 05:13:01 -- common/autotest_common.sh@852 -- # return 0 00:15:41.962 05:13:01 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:41.962 05:13:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:41.962 05:13:01 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:41.962 05:13:01 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:41.962 05:13:01 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:41.962 05:13:01 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:41.962 05:13:01 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:41.962 05:13:01 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:41.962 05:13:01 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:42.219 malloc1 00:15:42.219 05:13:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:42.485 [2024-07-26 05:13:01.512824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:42.485 [2024-07-26 05:13:01.512935] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.485 [2024-07-26 05:13:01.512975] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:15:42.485 [2024-07-26 05:13:01.512990] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.485 [2024-07-26 05:13:01.515846] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.485 [2024-07-26 05:13:01.515923] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:42.485 pt1 00:15:42.485 05:13:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
00:15:42.485 05:13:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:42.485 05:13:01 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:42.485 05:13:01 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:42.485 05:13:01 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:42.485 05:13:01 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:42.485 05:13:01 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:42.485 05:13:01 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:42.485 05:13:01 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:42.748 malloc2 00:15:42.748 05:13:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:43.005 [2024-07-26 05:13:01.945655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:43.005 [2024-07-26 05:13:01.945759] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.005 [2024-07-26 05:13:01.945790] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:15:43.005 [2024-07-26 05:13:01.945804] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.005 [2024-07-26 05:13:01.948214] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.005 [2024-07-26 05:13:01.948271] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:43.005 pt2 00:15:43.005 05:13:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:43.005 05:13:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:43.005 05:13:01 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:15:43.264 [2024-07-26 05:13:02.141693] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:43.264 [2024-07-26 05:13:02.143935] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:43.264 [2024-07-26 05:13:02.144164] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007b80 00:15:43.264 [2024-07-26 05:13:02.144189] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:43.264 [2024-07-26 05:13:02.144354] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:15:43.264 [2024-07-26 05:13:02.144775] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007b80 00:15:43.264 [2024-07-26 05:13:02.144812] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007b80 00:15:43.264 [2024-07-26 05:13:02.145006] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.264 05:13:02 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:43.264 05:13:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:43.264 05:13:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:43.264 05:13:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:43.264 05:13:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:43.264 05:13:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
00:15:43.264 05:13:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:43.264 05:13:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:43.264 05:13:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:43.264 05:13:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:43.264 05:13:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.264 05:13:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.264 05:13:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:43.264 "name": "raid_bdev1", 00:15:43.264 "uuid": "2a0c25be-22d4-485b-a048-15fbd5a5b47c", 00:15:43.264 "strip_size_kb": 64, 00:15:43.264 "state": "online", 00:15:43.264 "raid_level": "concat", 00:15:43.264 "superblock": true, 00:15:43.264 "num_base_bdevs": 2, 00:15:43.264 "num_base_bdevs_discovered": 2, 00:15:43.264 "num_base_bdevs_operational": 2, 00:15:43.264 "base_bdevs_list": [ 00:15:43.264 { 00:15:43.264 "name": "pt1", 00:15:43.264 "uuid": "fbc4f19c-296b-58db-aeff-b0beb1100297", 00:15:43.264 "is_configured": true, 00:15:43.264 "data_offset": 2048, 00:15:43.264 "data_size": 63488 00:15:43.264 }, 00:15:43.264 { 00:15:43.264 "name": "pt2", 00:15:43.264 "uuid": "9efb9028-9a02-508e-8851-c2df89345564", 00:15:43.264 "is_configured": true, 00:15:43.264 "data_offset": 2048, 00:15:43.264 "data_size": 63488 00:15:43.264 } 00:15:43.264 ] 00:15:43.264 }' 00:15:43.264 05:13:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:43.264 05:13:02 -- common/autotest_common.sh@10 -- # set +x 00:15:43.831 05:13:02 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:43.831 05:13:02 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:43.831 [2024-07-26 05:13:02.922067] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:44.092 05:13:02 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=2a0c25be-22d4-485b-a048-15fbd5a5b47c 00:15:44.092 05:13:02 -- bdev/bdev_raid.sh@380 -- # '[' -z 2a0c25be-22d4-485b-a048-15fbd5a5b47c ']' 00:15:44.092 05:13:02 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:44.092 [2024-07-26 05:13:03.117840] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:44.092 [2024-07-26 05:13:03.117892] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:44.092 [2024-07-26 05:13:03.118013] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:44.092 [2024-07-26 05:13:03.118089] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:44.092 [2024-07-26 05:13:03.118112] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007b80 name raid_bdev1, state offline 00:15:44.092 05:13:03 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:44.092 05:13:03 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.351 05:13:03 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:44.351 05:13:03 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:44.351 05:13:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:44.351 05:13:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
00:15:44.610 05:13:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:44.610 05:13:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:44.868 05:13:03 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:44.868 05:13:03 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:45.127 05:13:04 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:45.127 05:13:04 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:45.127 05:13:04 -- common/autotest_common.sh@640 -- # local es=0 00:15:45.127 05:13:04 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:45.127 05:13:04 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.127 05:13:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:45.127 05:13:04 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.127 05:13:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:45.127 05:13:04 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.127 05:13:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:45.127 05:13:04 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.127 05:13:04 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:45.127 05:13:04 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:45.385 [2024-07-26 05:13:04.246306] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:45.385 [2024-07-26 05:13:04.248508] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:45.385 [2024-07-26 05:13:04.248618] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:45.385 [2024-07-26 05:13:04.248700] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:45.385 [2024-07-26 05:13:04.248727] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:45.385 [2024-07-26 05:13:04.248739] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name raid_bdev1, state configuring 00:15:45.385 request: 00:15:45.385 { 00:15:45.385 "name": "raid_bdev1", 00:15:45.385 "raid_level": "concat", 00:15:45.385 "base_bdevs": [ 00:15:45.385 "malloc1", 00:15:45.385 "malloc2" 00:15:45.385 ], 00:15:45.385 "superblock": false, 00:15:45.385 "strip_size_kb": 64, 00:15:45.385 "method": "bdev_raid_create", 00:15:45.385 "req_id": 1 00:15:45.385 } 00:15:45.385 Got JSON-RPC error response 00:15:45.385 response: 00:15:45.385 { 00:15:45.385 "code": -17, 00:15:45.385 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:45.385 } 00:15:45.385 05:13:04 -- common/autotest_common.sh@643 -- # es=1 00:15:45.385 05:13:04 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:45.385 05:13:04 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:45.385 05:13:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:45.385 05:13:04 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.385 05:13:04 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:45.385 05:13:04 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:45.385 05:13:04 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:45.385 05:13:04 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:45.644 [2024-07-26 05:13:04.686317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:45.644 [2024-07-26 05:13:04.686423] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.644 [2024-07-26 05:13:04.686453] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:15:45.644 [2024-07-26 05:13:04.686467] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.644 [2024-07-26 05:13:04.688858] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.644 [2024-07-26 05:13:04.688930] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:45.644 [2024-07-26 05:13:04.689068] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:45.644 [2024-07-26 05:13:04.689128] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:45.644 pt1 00:15:45.644 05:13:04 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:15:45.644 05:13:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:45.644 05:13:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:45.644 05:13:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:45.644 05:13:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:45.644 05:13:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:45.644 05:13:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:45.644 05:13:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:45.644 05:13:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:45.644 05:13:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:45.644 05:13:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.644 05:13:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.902 05:13:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:45.902 "name": "raid_bdev1", 00:15:45.902 "uuid": "2a0c25be-22d4-485b-a048-15fbd5a5b47c", 00:15:45.902 "strip_size_kb": 64, 00:15:45.902 "state": "configuring", 00:15:45.902 "raid_level": "concat", 00:15:45.902 "superblock": true, 00:15:45.902 "num_base_bdevs": 2, 00:15:45.902 "num_base_bdevs_discovered": 1, 00:15:45.902 "num_base_bdevs_operational": 2, 00:15:45.902 "base_bdevs_list": [ 00:15:45.902 { 00:15:45.902 "name": "pt1", 00:15:45.902 "uuid": "fbc4f19c-296b-58db-aeff-b0beb1100297", 00:15:45.902 "is_configured": true, 00:15:45.902 "data_offset": 2048, 00:15:45.902 "data_size": 63488 00:15:45.902 }, 00:15:45.902 { 00:15:45.902 "name": null, 00:15:45.902 "uuid": 
"9efb9028-9a02-508e-8851-c2df89345564", 00:15:45.902 "is_configured": false, 00:15:45.902 "data_offset": 2048, 00:15:45.902 "data_size": 63488 00:15:45.902 } 00:15:45.902 ] 00:15:45.902 }' 00:15:45.902 05:13:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:45.902 05:13:04 -- common/autotest_common.sh@10 -- # set +x 00:15:46.160 05:13:05 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:15:46.160 05:13:05 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:46.160 05:13:05 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:46.160 05:13:05 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:46.420 [2024-07-26 05:13:05.402516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:46.420 [2024-07-26 05:13:05.402625] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.420 [2024-07-26 05:13:05.402664] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:15:46.420 [2024-07-26 05:13:05.402678] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.420 [2024-07-26 05:13:05.403203] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.420 [2024-07-26 05:13:05.403237] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:46.420 [2024-07-26 05:13:05.403338] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:46.420 [2024-07-26 05:13:05.403366] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:46.420 [2024-07-26 05:13:05.403498] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:15:46.420 [2024-07-26 05:13:05.403512] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:46.420 [2024-07-26 05:13:05.403635] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:15:46.420 [2024-07-26 05:13:05.403990] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:15:46.420 [2024-07-26 05:13:05.404048] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:15:46.420 [2024-07-26 05:13:05.404221] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.420 pt2 00:15:46.420 05:13:05 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:46.420 05:13:05 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:46.420 05:13:05 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:46.420 05:13:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:46.420 05:13:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:46.420 05:13:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:46.420 05:13:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:46.420 05:13:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:46.420 05:13:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:46.420 05:13:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:46.420 05:13:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:46.420 05:13:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:46.420 05:13:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.420 05:13:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.692 05:13:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:46.692 "name": "raid_bdev1", 00:15:46.692 "uuid": "2a0c25be-22d4-485b-a048-15fbd5a5b47c", 00:15:46.692 "strip_size_kb": 64, 00:15:46.692 "state": "online", 00:15:46.692 "raid_level": "concat", 00:15:46.692 "superblock": true, 00:15:46.692 "num_base_bdevs": 2, 00:15:46.692 "num_base_bdevs_discovered": 2, 00:15:46.692 "num_base_bdevs_operational": 2, 00:15:46.692 "base_bdevs_list": [ 00:15:46.692 { 00:15:46.692 "name": "pt1", 00:15:46.692 "uuid": "fbc4f19c-296b-58db-aeff-b0beb1100297", 00:15:46.692 "is_configured": true, 00:15:46.692 "data_offset": 2048, 00:15:46.692 "data_size": 63488 00:15:46.692 }, 00:15:46.692 { 00:15:46.692 "name": "pt2", 00:15:46.692 "uuid": "9efb9028-9a02-508e-8851-c2df89345564", 00:15:46.692 "is_configured": true, 00:15:46.692 "data_offset": 2048, 00:15:46.692 "data_size": 63488 00:15:46.692 } 00:15:46.692 ] 00:15:46.692 }' 00:15:46.692 05:13:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:46.692 05:13:05 -- common/autotest_common.sh@10 -- # set +x 00:15:46.950 05:13:05 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:46.950 05:13:05 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:47.208 [2024-07-26 05:13:06.138959] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:47.208 05:13:06 -- bdev/bdev_raid.sh@430 -- # '[' 2a0c25be-22d4-485b-a048-15fbd5a5b47c '!=' 2a0c25be-22d4-485b-a048-15fbd5a5b47c ']' 00:15:47.208 05:13:06 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:15:47.208 05:13:06 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:47.208 05:13:06 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:47.208 05:13:06 -- bdev/bdev_raid.sh@511 -- # killprocess 70071 00:15:47.208 05:13:06 -- common/autotest_common.sh@926 -- # '[' -z 70071 ']' 00:15:47.208 05:13:06 -- common/autotest_common.sh@930 -- # kill -0 70071 00:15:47.208 05:13:06 -- common/autotest_common.sh@931 -- # uname 00:15:47.208 05:13:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:47.208 05:13:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70071 00:15:47.208 05:13:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:47.208 killing process with pid 70071 00:15:47.208 05:13:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:47.208 05:13:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70071' 00:15:47.208 05:13:06 -- common/autotest_common.sh@945 -- # kill 70071 00:15:47.208 [2024-07-26 05:13:06.186046] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:47.208 05:13:06 -- common/autotest_common.sh@950 -- # wait 70071 00:15:47.208 [2024-07-26 05:13:06.186145] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:47.208 [2024-07-26 05:13:06.186201] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:47.208 [2024-07-26 05:13:06.186222] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:15:47.466 [2024-07-26 05:13:06.332754] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:48.402 05:13:07 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:48.402 00:15:48.402 real 0m7.358s 00:15:48.402 user 
0m11.737s 00:15:48.402 sys 0m1.065s 00:15:48.402 05:13:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:48.402 05:13:07 -- common/autotest_common.sh@10 -- # set +x 00:15:48.402 ************************************ 00:15:48.402 END TEST raid_superblock_test 00:15:48.402 ************************************ 00:15:48.402 05:13:07 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:48.402 05:13:07 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:15:48.402 05:13:07 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:48.402 05:13:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:48.402 05:13:07 -- common/autotest_common.sh@10 -- # set +x 00:15:48.402 ************************************ 00:15:48.402 START TEST raid_state_function_test 00:15:48.402 ************************************ 00:15:48.402 05:13:07 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 false 00:15:48.402 05:13:07 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:15:48.402 05:13:07 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:48.402 05:13:07 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:48.402 05:13:07 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:48.402 05:13:07 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:48.402 05:13:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:48.402 05:13:07 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:15:48.402 05:13:07 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:48.402 05:13:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:48.402 05:13:07 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:15:48.402 05:13:07 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:48.402 05:13:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:48.402 05:13:07 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:48.402 05:13:07 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:48.402 05:13:07 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:48.402 05:13:07 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:48.402 05:13:07 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:48.402 05:13:07 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:48.402 05:13:07 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:15:48.402 05:13:07 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:15:48.402 05:13:07 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:48.402 05:13:07 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:48.402 05:13:07 -- bdev/bdev_raid.sh@226 -- # raid_pid=70289 00:15:48.402 Process raid pid: 70289 00:15:48.402 05:13:07 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:48.402 05:13:07 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 70289' 00:15:48.402 05:13:07 -- bdev/bdev_raid.sh@228 -- # waitforlisten 70289 /var/tmp/spdk-raid.sock 00:15:48.402 05:13:07 -- common/autotest_common.sh@819 -- # '[' -z 70289 ']' 00:15:48.402 05:13:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:48.402 05:13:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:48.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:15:48.402 05:13:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:48.402 05:13:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:48.402 05:13:07 -- common/autotest_common.sh@10 -- # set +x 00:15:48.661 [2024-07-26 05:13:07.531512] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:48.661 [2024-07-26 05:13:07.531702] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:48.661 [2024-07-26 05:13:07.717440] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.919 [2024-07-26 05:13:07.890452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.178 [2024-07-26 05:13:08.068980] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.436 05:13:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:49.436 05:13:08 -- common/autotest_common.sh@852 -- # return 0 00:15:49.436 05:13:08 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:49.694 [2024-07-26 05:13:08.663572] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:49.694 [2024-07-26 05:13:08.663652] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:49.694 [2024-07-26 05:13:08.663667] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:49.694 [2024-07-26 05:13:08.663681] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:49.694 05:13:08 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:49.694 05:13:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:49.694 05:13:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:49.694 05:13:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:49.694 05:13:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:49.694 05:13:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:49.694 05:13:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:49.694 05:13:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:49.694 05:13:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:49.694 05:13:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:49.694 05:13:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.694 05:13:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.952 05:13:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:49.952 "name": "Existed_Raid", 00:15:49.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.952 "strip_size_kb": 0, 00:15:49.952 "state": "configuring", 00:15:49.952 "raid_level": "raid1", 00:15:49.952 "superblock": false, 00:15:49.952 "num_base_bdevs": 2, 00:15:49.952 "num_base_bdevs_discovered": 0, 00:15:49.952 "num_base_bdevs_operational": 2, 00:15:49.952 "base_bdevs_list": [ 00:15:49.952 { 00:15:49.952 "name": "BaseBdev1", 00:15:49.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.952 "is_configured": false, 00:15:49.952 
"data_offset": 0, 00:15:49.952 "data_size": 0 00:15:49.952 }, 00:15:49.952 { 00:15:49.952 "name": "BaseBdev2", 00:15:49.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.952 "is_configured": false, 00:15:49.952 "data_offset": 0, 00:15:49.952 "data_size": 0 00:15:49.952 } 00:15:49.952 ] 00:15:49.952 }' 00:15:49.952 05:13:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:49.952 05:13:08 -- common/autotest_common.sh@10 -- # set +x 00:15:50.211 05:13:09 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:50.469 [2024-07-26 05:13:09.447670] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:50.469 [2024-07-26 05:13:09.447738] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:15:50.469 05:13:09 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:50.727 [2024-07-26 05:13:09.651743] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:50.727 [2024-07-26 05:13:09.651813] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:50.727 [2024-07-26 05:13:09.651834] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:50.727 [2024-07-26 05:13:09.651849] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:50.727 05:13:09 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:50.985 [2024-07-26 05:13:09.878162] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.985 BaseBdev1 00:15:50.985 05:13:09 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:50.985 05:13:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:50.985 05:13:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:50.985 05:13:09 -- common/autotest_common.sh@889 -- # local i 00:15:50.985 05:13:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:50.985 05:13:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:50.985 05:13:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:51.243 05:13:10 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:51.243 [ 00:15:51.243 { 00:15:51.243 "name": "BaseBdev1", 00:15:51.243 "aliases": [ 00:15:51.243 "0d99cee5-4526-494e-90d7-f1fae68819ae" 00:15:51.243 ], 00:15:51.243 "product_name": "Malloc disk", 00:15:51.243 "block_size": 512, 00:15:51.243 "num_blocks": 65536, 00:15:51.243 "uuid": "0d99cee5-4526-494e-90d7-f1fae68819ae", 00:15:51.243 "assigned_rate_limits": { 00:15:51.244 "rw_ios_per_sec": 0, 00:15:51.244 "rw_mbytes_per_sec": 0, 00:15:51.244 "r_mbytes_per_sec": 0, 00:15:51.244 "w_mbytes_per_sec": 0 00:15:51.244 }, 00:15:51.244 "claimed": true, 00:15:51.244 "claim_type": "exclusive_write", 00:15:51.244 "zoned": false, 00:15:51.244 "supported_io_types": { 00:15:51.244 "read": true, 00:15:51.244 "write": true, 00:15:51.244 "unmap": true, 00:15:51.244 "write_zeroes": true, 00:15:51.244 "flush": true, 00:15:51.244 "reset": true, 00:15:51.244 "compare": false, 
00:15:51.244 "compare_and_write": false, 00:15:51.244 "abort": true, 00:15:51.244 "nvme_admin": false, 00:15:51.244 "nvme_io": false 00:15:51.244 }, 00:15:51.244 "memory_domains": [ 00:15:51.244 { 00:15:51.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.244 "dma_device_type": 2 00:15:51.244 } 00:15:51.244 ], 00:15:51.244 "driver_specific": {} 00:15:51.244 } 00:15:51.244 ] 00:15:51.502 05:13:10 -- common/autotest_common.sh@895 -- # return 0 00:15:51.502 05:13:10 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:51.502 05:13:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:51.502 05:13:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:51.502 05:13:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:51.502 05:13:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:51.502 05:13:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:51.502 05:13:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:51.502 05:13:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:51.502 05:13:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:51.502 05:13:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:51.502 05:13:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.502 05:13:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.502 05:13:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:51.502 "name": "Existed_Raid", 00:15:51.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.502 "strip_size_kb": 0, 00:15:51.502 "state": "configuring", 00:15:51.502 "raid_level": "raid1", 00:15:51.502 "superblock": false, 00:15:51.502 "num_base_bdevs": 2, 00:15:51.502 "num_base_bdevs_discovered": 1, 00:15:51.502 "num_base_bdevs_operational": 2, 00:15:51.502 "base_bdevs_list": [ 00:15:51.502 { 00:15:51.502 "name": "BaseBdev1", 00:15:51.502 "uuid": "0d99cee5-4526-494e-90d7-f1fae68819ae", 00:15:51.502 "is_configured": true, 00:15:51.502 "data_offset": 0, 00:15:51.502 "data_size": 65536 00:15:51.502 }, 00:15:51.502 { 00:15:51.502 "name": "BaseBdev2", 00:15:51.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.502 "is_configured": false, 00:15:51.502 "data_offset": 0, 00:15:51.502 "data_size": 0 00:15:51.502 } 00:15:51.502 ] 00:15:51.502 }' 00:15:51.502 05:13:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:51.502 05:13:10 -- common/autotest_common.sh@10 -- # set +x 00:15:52.069 05:13:10 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:52.069 [2024-07-26 05:13:11.090623] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:52.069 [2024-07-26 05:13:11.090691] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:15:52.069 05:13:11 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:52.069 05:13:11 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:52.328 [2024-07-26 05:13:11.342731] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:52.328 [2024-07-26 05:13:11.344764] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:52.328 [2024-07-26 
05:13:11.344813] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:52.328 05:13:11 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:52.328 05:13:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:52.328 05:13:11 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:52.328 05:13:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:52.328 05:13:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:52.328 05:13:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:52.328 05:13:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:52.328 05:13:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:52.328 05:13:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:52.328 05:13:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:52.328 05:13:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:52.328 05:13:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:52.328 05:13:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.328 05:13:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.587 05:13:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:52.587 "name": "Existed_Raid", 00:15:52.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.587 "strip_size_kb": 0, 00:15:52.587 "state": "configuring", 00:15:52.587 "raid_level": "raid1", 00:15:52.587 "superblock": false, 00:15:52.587 "num_base_bdevs": 2, 00:15:52.587 "num_base_bdevs_discovered": 1, 00:15:52.587 "num_base_bdevs_operational": 2, 00:15:52.587 "base_bdevs_list": [ 00:15:52.587 { 00:15:52.587 "name": "BaseBdev1", 00:15:52.587 "uuid": "0d99cee5-4526-494e-90d7-f1fae68819ae", 00:15:52.587 "is_configured": true, 00:15:52.587 "data_offset": 0, 00:15:52.587 "data_size": 65536 00:15:52.587 }, 00:15:52.587 { 00:15:52.587 "name": "BaseBdev2", 00:15:52.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.587 "is_configured": false, 00:15:52.587 "data_offset": 0, 00:15:52.587 "data_size": 0 00:15:52.587 } 00:15:52.587 ] 00:15:52.587 }' 00:15:52.587 05:13:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:52.587 05:13:11 -- common/autotest_common.sh@10 -- # set +x 00:15:52.845 05:13:11 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:53.106 [2024-07-26 05:13:12.178452] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:53.106 [2024-07-26 05:13:12.178529] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:15:53.106 [2024-07-26 05:13:12.178541] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:53.106 [2024-07-26 05:13:12.178660] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:15:53.106 [2024-07-26 05:13:12.179007] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:15:53.106 [2024-07-26 05:13:12.179051] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:15:53.106 [2024-07-26 05:13:12.179372] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:53.106 BaseBdev2 00:15:53.106 05:13:12 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:53.106 
05:13:12 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:53.107 05:13:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:53.107 05:13:12 -- common/autotest_common.sh@889 -- # local i 00:15:53.107 05:13:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:53.107 05:13:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:53.107 05:13:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:53.369 05:13:12 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:53.629 [ 00:15:53.629 { 00:15:53.629 "name": "BaseBdev2", 00:15:53.629 "aliases": [ 00:15:53.629 "48a4e27f-9e9a-42a7-b84e-541d725800b3" 00:15:53.629 ], 00:15:53.629 "product_name": "Malloc disk", 00:15:53.629 "block_size": 512, 00:15:53.629 "num_blocks": 65536, 00:15:53.629 "uuid": "48a4e27f-9e9a-42a7-b84e-541d725800b3", 00:15:53.629 "assigned_rate_limits": { 00:15:53.629 "rw_ios_per_sec": 0, 00:15:53.629 "rw_mbytes_per_sec": 0, 00:15:53.629 "r_mbytes_per_sec": 0, 00:15:53.629 "w_mbytes_per_sec": 0 00:15:53.629 }, 00:15:53.629 "claimed": true, 00:15:53.629 "claim_type": "exclusive_write", 00:15:53.629 "zoned": false, 00:15:53.629 "supported_io_types": { 00:15:53.629 "read": true, 00:15:53.629 "write": true, 00:15:53.629 "unmap": true, 00:15:53.629 "write_zeroes": true, 00:15:53.629 "flush": true, 00:15:53.629 "reset": true, 00:15:53.629 "compare": false, 00:15:53.629 "compare_and_write": false, 00:15:53.629 "abort": true, 00:15:53.629 "nvme_admin": false, 00:15:53.629 "nvme_io": false 00:15:53.629 }, 00:15:53.629 "memory_domains": [ 00:15:53.629 { 00:15:53.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.629 "dma_device_type": 2 00:15:53.629 } 00:15:53.629 ], 00:15:53.629 "driver_specific": {} 00:15:53.629 } 00:15:53.629 ] 00:15:53.629 05:13:12 -- common/autotest_common.sh@895 -- # return 0 00:15:53.629 05:13:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:53.629 05:13:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:53.629 05:13:12 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:53.629 05:13:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:53.629 05:13:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:53.629 05:13:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:53.629 05:13:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:53.629 05:13:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:53.629 05:13:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:53.629 05:13:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:53.629 05:13:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:53.629 05:13:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:53.629 05:13:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.629 05:13:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.887 05:13:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:53.887 "name": "Existed_Raid", 00:15:53.887 "uuid": "174e1e7a-2367-48d5-a039-12552993b526", 00:15:53.887 "strip_size_kb": 0, 00:15:53.887 "state": "online", 00:15:53.887 "raid_level": "raid1", 00:15:53.887 "superblock": false, 00:15:53.887 "num_base_bdevs": 2, 00:15:53.887 
"num_base_bdevs_discovered": 2, 00:15:53.887 "num_base_bdevs_operational": 2, 00:15:53.887 "base_bdevs_list": [ 00:15:53.887 { 00:15:53.887 "name": "BaseBdev1", 00:15:53.887 "uuid": "0d99cee5-4526-494e-90d7-f1fae68819ae", 00:15:53.887 "is_configured": true, 00:15:53.887 "data_offset": 0, 00:15:53.887 "data_size": 65536 00:15:53.887 }, 00:15:53.887 { 00:15:53.887 "name": "BaseBdev2", 00:15:53.887 "uuid": "48a4e27f-9e9a-42a7-b84e-541d725800b3", 00:15:53.888 "is_configured": true, 00:15:53.888 "data_offset": 0, 00:15:53.888 "data_size": 65536 00:15:53.888 } 00:15:53.888 ] 00:15:53.888 }' 00:15:53.888 05:13:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:53.888 05:13:12 -- common/autotest_common.sh@10 -- # set +x 00:15:54.146 05:13:13 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:54.403 [2024-07-26 05:13:13.374835] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:54.403 05:13:13 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:54.403 05:13:13 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:15:54.403 05:13:13 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:54.403 05:13:13 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:54.403 05:13:13 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:15:54.403 05:13:13 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:54.403 05:13:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:54.404 05:13:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:54.404 05:13:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:54.404 05:13:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:54.404 05:13:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:54.404 05:13:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:54.404 05:13:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:54.404 05:13:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:54.404 05:13:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:54.404 05:13:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.404 05:13:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.660 05:13:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:54.660 "name": "Existed_Raid", 00:15:54.660 "uuid": "174e1e7a-2367-48d5-a039-12552993b526", 00:15:54.660 "strip_size_kb": 0, 00:15:54.660 "state": "online", 00:15:54.660 "raid_level": "raid1", 00:15:54.660 "superblock": false, 00:15:54.660 "num_base_bdevs": 2, 00:15:54.660 "num_base_bdevs_discovered": 1, 00:15:54.660 "num_base_bdevs_operational": 1, 00:15:54.660 "base_bdevs_list": [ 00:15:54.660 { 00:15:54.660 "name": null, 00:15:54.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.660 "is_configured": false, 00:15:54.660 "data_offset": 0, 00:15:54.660 "data_size": 65536 00:15:54.660 }, 00:15:54.660 { 00:15:54.660 "name": "BaseBdev2", 00:15:54.660 "uuid": "48a4e27f-9e9a-42a7-b84e-541d725800b3", 00:15:54.660 "is_configured": true, 00:15:54.660 "data_offset": 0, 00:15:54.660 "data_size": 65536 00:15:54.660 } 00:15:54.660 ] 00:15:54.660 }' 00:15:54.660 05:13:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:54.660 05:13:13 -- common/autotest_common.sh@10 -- # set +x 00:15:54.918 05:13:14 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:54.918 05:13:14 -- bdev/bdev_raid.sh@273 -- # 
(( i < num_base_bdevs )) 00:15:54.918 05:13:14 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.918 05:13:14 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:55.175 05:13:14 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:55.175 05:13:14 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:55.175 05:13:14 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:55.434 [2024-07-26 05:13:14.449107] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:55.434 [2024-07-26 05:13:14.449180] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:55.434 [2024-07-26 05:13:14.449244] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:55.434 [2024-07-26 05:13:14.523483] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:55.434 [2024-07-26 05:13:14.523543] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:15:55.434 05:13:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:55.434 05:13:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:55.692 05:13:14 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.692 05:13:14 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:55.950 05:13:14 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:55.950 05:13:14 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:55.950 05:13:14 -- bdev/bdev_raid.sh@287 -- # killprocess 70289 00:15:55.950 05:13:14 -- common/autotest_common.sh@926 -- # '[' -z 70289 ']' 00:15:55.950 05:13:14 -- common/autotest_common.sh@930 -- # kill -0 70289 00:15:55.950 05:13:14 -- common/autotest_common.sh@931 -- # uname 00:15:55.950 05:13:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:55.950 05:13:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70289 00:15:55.950 05:13:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:55.950 killing process with pid 70289 00:15:55.950 05:13:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:55.950 05:13:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70289' 00:15:55.950 05:13:14 -- common/autotest_common.sh@945 -- # kill 70289 00:15:55.950 05:13:14 -- common/autotest_common.sh@950 -- # wait 70289 00:15:55.950 [2024-07-26 05:13:14.834535] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:55.950 [2024-07-26 05:13:14.834644] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:56.885 05:13:15 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:56.885 00:15:56.885 real 0m8.434s 00:15:56.885 user 0m13.743s 00:15:56.885 sys 0m1.231s 00:15:56.885 05:13:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:56.885 05:13:15 -- common/autotest_common.sh@10 -- # set +x 00:15:56.885 ************************************ 00:15:56.885 END TEST raid_state_function_test 00:15:56.885 ************************************ 00:15:56.885 05:13:15 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:15:56.885 05:13:15 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:56.885 05:13:15 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:15:56.885 05:13:15 -- common/autotest_common.sh@10 -- # set +x 00:15:56.885 ************************************ 00:15:56.885 START TEST raid_state_function_test_sb 00:15:56.885 ************************************ 00:15:56.886 05:13:15 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 true 00:15:56.886 05:13:15 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:15:56.886 05:13:15 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:56.886 05:13:15 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:56.886 05:13:15 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:56.886 05:13:15 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:56.886 05:13:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:56.886 05:13:15 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:15:56.886 05:13:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:56.886 05:13:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:56.886 05:13:15 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:15:56.886 05:13:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:56.886 05:13:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:56.886 05:13:15 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:56.886 05:13:15 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:56.886 05:13:15 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:56.886 05:13:15 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:56.886 05:13:15 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:56.886 05:13:15 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:56.886 05:13:15 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:15:56.886 05:13:15 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:15:56.886 05:13:15 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:56.886 05:13:15 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:56.886 05:13:15 -- bdev/bdev_raid.sh@226 -- # raid_pid=70574 00:15:56.886 Process raid pid: 70574 00:15:56.886 05:13:15 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 70574' 00:15:56.886 05:13:15 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:56.886 05:13:15 -- bdev/bdev_raid.sh@228 -- # waitforlisten 70574 /var/tmp/spdk-raid.sock 00:15:56.886 05:13:15 -- common/autotest_common.sh@819 -- # '[' -z 70574 ']' 00:15:56.886 05:13:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:56.886 05:13:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:56.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:56.886 05:13:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:56.886 05:13:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:56.886 05:13:15 -- common/autotest_common.sh@10 -- # set +x 00:15:57.144 [2024-07-26 05:13:16.009816] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
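raid_state_function_test_sb replays the same state machine with superblocks enabled: superblock_create_arg is set to -s, so each bdev_raid_create below writes on-disk metadata to its members (data_offset moves to 2048 and data_size shrinks to 63488 blocks). Before issuing any RPCs, the harness starts a fresh bdev_svc app on a private socket and blocks until it answers. A rough equivalent of that startup, assuming the build-tree paths from the trace; the polling loop stands in for the harness's waitforlisten helper:

  # start the bare bdev service with raid debug logging on a private RPC socket
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!

  # wait until the app answers RPCs (stand-in for: waitforlisten $raid_pid /var/tmp/spdk-raid.sock)
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done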
00:15:57.144 [2024-07-26 05:13:16.009991] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.144 [2024-07-26 05:13:16.181527] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.403 [2024-07-26 05:13:16.349765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.403 [2024-07-26 05:13:16.512189] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:57.970 05:13:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:57.970 05:13:16 -- common/autotest_common.sh@852 -- # return 0 00:15:57.970 05:13:16 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:58.228 [2024-07-26 05:13:17.148657] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:58.228 [2024-07-26 05:13:17.148950] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:58.228 [2024-07-26 05:13:17.149101] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:58.228 [2024-07-26 05:13:17.149167] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:58.228 05:13:17 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:58.228 05:13:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:58.229 05:13:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:58.229 05:13:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:58.229 05:13:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:58.229 05:13:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:58.229 05:13:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:58.229 05:13:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:58.229 05:13:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:58.229 05:13:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:58.229 05:13:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.229 05:13:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.487 05:13:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:58.487 "name": "Existed_Raid", 00:15:58.487 "uuid": "3f04d8ce-f4ba-467b-ac53-8a6c454b36d7", 00:15:58.487 "strip_size_kb": 0, 00:15:58.487 "state": "configuring", 00:15:58.487 "raid_level": "raid1", 00:15:58.487 "superblock": true, 00:15:58.487 "num_base_bdevs": 2, 00:15:58.487 "num_base_bdevs_discovered": 0, 00:15:58.487 "num_base_bdevs_operational": 2, 00:15:58.487 "base_bdevs_list": [ 00:15:58.487 { 00:15:58.487 "name": "BaseBdev1", 00:15:58.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.487 "is_configured": false, 00:15:58.487 "data_offset": 0, 00:15:58.487 "data_size": 0 00:15:58.487 }, 00:15:58.487 { 00:15:58.487 "name": "BaseBdev2", 00:15:58.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.487 "is_configured": false, 00:15:58.487 "data_offset": 0, 00:15:58.487 "data_size": 0 00:15:58.487 } 00:15:58.487 ] 00:15:58.487 }' 00:15:58.487 05:13:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:58.487 05:13:17 -- 
common/autotest_common.sh@10 -- # set +x 00:15:58.746 05:13:17 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:59.004 [2024-07-26 05:13:18.016711] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:59.004 [2024-07-26 05:13:18.016758] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:15:59.004 05:13:18 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:59.263 [2024-07-26 05:13:18.220825] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:59.263 [2024-07-26 05:13:18.220897] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:59.263 [2024-07-26 05:13:18.220921] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:59.263 [2024-07-26 05:13:18.220938] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:59.263 05:13:18 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:59.522 [2024-07-26 05:13:18.514771] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:59.522 BaseBdev1 00:15:59.522 05:13:18 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:59.522 05:13:18 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:59.522 05:13:18 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:59.522 05:13:18 -- common/autotest_common.sh@889 -- # local i 00:15:59.522 05:13:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:59.522 05:13:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:59.522 05:13:18 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:59.781 05:13:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:00.050 [ 00:16:00.050 { 00:16:00.050 "name": "BaseBdev1", 00:16:00.050 "aliases": [ 00:16:00.050 "867b6d57-56e7-4468-9da4-7dd8c0559728" 00:16:00.050 ], 00:16:00.050 "product_name": "Malloc disk", 00:16:00.050 "block_size": 512, 00:16:00.050 "num_blocks": 65536, 00:16:00.050 "uuid": "867b6d57-56e7-4468-9da4-7dd8c0559728", 00:16:00.050 "assigned_rate_limits": { 00:16:00.050 "rw_ios_per_sec": 0, 00:16:00.050 "rw_mbytes_per_sec": 0, 00:16:00.050 "r_mbytes_per_sec": 0, 00:16:00.050 "w_mbytes_per_sec": 0 00:16:00.050 }, 00:16:00.050 "claimed": true, 00:16:00.050 "claim_type": "exclusive_write", 00:16:00.050 "zoned": false, 00:16:00.050 "supported_io_types": { 00:16:00.050 "read": true, 00:16:00.050 "write": true, 00:16:00.050 "unmap": true, 00:16:00.050 "write_zeroes": true, 00:16:00.050 "flush": true, 00:16:00.050 "reset": true, 00:16:00.050 "compare": false, 00:16:00.050 "compare_and_write": false, 00:16:00.050 "abort": true, 00:16:00.050 "nvme_admin": false, 00:16:00.050 "nvme_io": false 00:16:00.050 }, 00:16:00.050 "memory_domains": [ 00:16:00.050 { 00:16:00.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.050 "dma_device_type": 2 00:16:00.050 } 00:16:00.050 ], 00:16:00.050 "driver_specific": {} 00:16:00.050 } 00:16:00.050 ] 00:16:00.050 05:13:18 -- 
common/autotest_common.sh@895 -- # return 0 00:16:00.050 05:13:18 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:00.050 05:13:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:00.050 05:13:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:00.050 05:13:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:00.050 05:13:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:00.050 05:13:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:00.050 05:13:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:00.050 05:13:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:00.050 05:13:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:00.050 05:13:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:00.050 05:13:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.050 05:13:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.321 05:13:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:00.321 "name": "Existed_Raid", 00:16:00.321 "uuid": "4d19a6f6-324f-4502-9e03-faee98ae64cf", 00:16:00.321 "strip_size_kb": 0, 00:16:00.321 "state": "configuring", 00:16:00.321 "raid_level": "raid1", 00:16:00.321 "superblock": true, 00:16:00.321 "num_base_bdevs": 2, 00:16:00.321 "num_base_bdevs_discovered": 1, 00:16:00.321 "num_base_bdevs_operational": 2, 00:16:00.321 "base_bdevs_list": [ 00:16:00.321 { 00:16:00.321 "name": "BaseBdev1", 00:16:00.321 "uuid": "867b6d57-56e7-4468-9da4-7dd8c0559728", 00:16:00.321 "is_configured": true, 00:16:00.321 "data_offset": 2048, 00:16:00.321 "data_size": 63488 00:16:00.321 }, 00:16:00.321 { 00:16:00.321 "name": "BaseBdev2", 00:16:00.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.321 "is_configured": false, 00:16:00.321 "data_offset": 0, 00:16:00.321 "data_size": 0 00:16:00.321 } 00:16:00.321 ] 00:16:00.321 }' 00:16:00.321 05:13:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:00.321 05:13:19 -- common/autotest_common.sh@10 -- # set +x 00:16:00.579 05:13:19 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:00.579 [2024-07-26 05:13:19.667153] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:00.579 [2024-07-26 05:13:19.667213] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:16:00.579 05:13:19 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:00.579 05:13:19 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:01.146 05:13:19 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:01.147 BaseBdev1 00:16:01.147 05:13:20 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:01.147 05:13:20 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:01.147 05:13:20 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:01.147 05:13:20 -- common/autotest_common.sh@889 -- # local i 00:16:01.147 05:13:20 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:01.147 05:13:20 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:01.147 05:13:20 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:01.405 05:13:20 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:01.664 [ 00:16:01.664 { 00:16:01.664 "name": "BaseBdev1", 00:16:01.664 "aliases": [ 00:16:01.664 "c0ac36f6-ba9e-43d1-896c-c8412e334c4a" 00:16:01.664 ], 00:16:01.664 "product_name": "Malloc disk", 00:16:01.664 "block_size": 512, 00:16:01.664 "num_blocks": 65536, 00:16:01.664 "uuid": "c0ac36f6-ba9e-43d1-896c-c8412e334c4a", 00:16:01.664 "assigned_rate_limits": { 00:16:01.664 "rw_ios_per_sec": 0, 00:16:01.664 "rw_mbytes_per_sec": 0, 00:16:01.664 "r_mbytes_per_sec": 0, 00:16:01.664 "w_mbytes_per_sec": 0 00:16:01.664 }, 00:16:01.664 "claimed": false, 00:16:01.664 "zoned": false, 00:16:01.664 "supported_io_types": { 00:16:01.664 "read": true, 00:16:01.664 "write": true, 00:16:01.664 "unmap": true, 00:16:01.664 "write_zeroes": true, 00:16:01.664 "flush": true, 00:16:01.664 "reset": true, 00:16:01.664 "compare": false, 00:16:01.664 "compare_and_write": false, 00:16:01.664 "abort": true, 00:16:01.664 "nvme_admin": false, 00:16:01.664 "nvme_io": false 00:16:01.664 }, 00:16:01.664 "memory_domains": [ 00:16:01.664 { 00:16:01.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.664 "dma_device_type": 2 00:16:01.664 } 00:16:01.664 ], 00:16:01.664 "driver_specific": {} 00:16:01.664 } 00:16:01.664 ] 00:16:01.664 05:13:20 -- common/autotest_common.sh@895 -- # return 0 00:16:01.664 05:13:20 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:01.923 [2024-07-26 05:13:20.858721] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:01.923 [2024-07-26 05:13:20.861002] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:01.923 [2024-07-26 05:13:20.861086] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:01.923 05:13:20 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:01.923 05:13:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:01.923 05:13:20 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:01.923 05:13:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:01.923 05:13:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:01.923 05:13:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:01.923 05:13:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:01.923 05:13:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:01.923 05:13:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:01.923 05:13:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:01.923 05:13:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:01.923 05:13:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:01.923 05:13:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.923 05:13:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.182 05:13:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:02.182 "name": "Existed_Raid", 00:16:02.182 "uuid": "2a4f7dc9-67d2-4c1a-b59b-0dddd4d3dc59", 00:16:02.182 "strip_size_kb": 0, 00:16:02.182 "state": "configuring", 
00:16:02.182 "raid_level": "raid1", 00:16:02.182 "superblock": true, 00:16:02.182 "num_base_bdevs": 2, 00:16:02.182 "num_base_bdevs_discovered": 1, 00:16:02.182 "num_base_bdevs_operational": 2, 00:16:02.182 "base_bdevs_list": [ 00:16:02.182 { 00:16:02.182 "name": "BaseBdev1", 00:16:02.182 "uuid": "c0ac36f6-ba9e-43d1-896c-c8412e334c4a", 00:16:02.182 "is_configured": true, 00:16:02.182 "data_offset": 2048, 00:16:02.182 "data_size": 63488 00:16:02.182 }, 00:16:02.182 { 00:16:02.182 "name": "BaseBdev2", 00:16:02.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.182 "is_configured": false, 00:16:02.182 "data_offset": 0, 00:16:02.182 "data_size": 0 00:16:02.182 } 00:16:02.182 ] 00:16:02.182 }' 00:16:02.182 05:13:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:02.182 05:13:21 -- common/autotest_common.sh@10 -- # set +x 00:16:02.440 05:13:21 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:02.699 [2024-07-26 05:13:21.648571] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:02.699 BaseBdev2 00:16:02.699 [2024-07-26 05:13:21.649051] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:16:02.699 [2024-07-26 05:13:21.649078] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:02.699 [2024-07-26 05:13:21.649212] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:16:02.699 [2024-07-26 05:13:21.649612] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:16:02.699 [2024-07-26 05:13:21.649640] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:16:02.699 [2024-07-26 05:13:21.649791] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.699 05:13:21 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:02.699 05:13:21 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:02.699 05:13:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:02.699 05:13:21 -- common/autotest_common.sh@889 -- # local i 00:16:02.699 05:13:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:02.699 05:13:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:02.699 05:13:21 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:02.957 05:13:21 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:03.216 [ 00:16:03.216 { 00:16:03.216 "name": "BaseBdev2", 00:16:03.216 "aliases": [ 00:16:03.216 "b5b7dcd8-a6b1-47b3-b043-b67089f71a4c" 00:16:03.216 ], 00:16:03.216 "product_name": "Malloc disk", 00:16:03.216 "block_size": 512, 00:16:03.216 "num_blocks": 65536, 00:16:03.216 "uuid": "b5b7dcd8-a6b1-47b3-b043-b67089f71a4c", 00:16:03.216 "assigned_rate_limits": { 00:16:03.216 "rw_ios_per_sec": 0, 00:16:03.216 "rw_mbytes_per_sec": 0, 00:16:03.216 "r_mbytes_per_sec": 0, 00:16:03.216 "w_mbytes_per_sec": 0 00:16:03.216 }, 00:16:03.216 "claimed": true, 00:16:03.216 "claim_type": "exclusive_write", 00:16:03.216 "zoned": false, 00:16:03.216 "supported_io_types": { 00:16:03.216 "read": true, 00:16:03.216 "write": true, 00:16:03.216 "unmap": true, 00:16:03.216 "write_zeroes": true, 00:16:03.216 "flush": true, 00:16:03.216 "reset": true, 
00:16:03.216 "compare": false, 00:16:03.216 "compare_and_write": false, 00:16:03.216 "abort": true, 00:16:03.216 "nvme_admin": false, 00:16:03.216 "nvme_io": false 00:16:03.216 }, 00:16:03.216 "memory_domains": [ 00:16:03.216 { 00:16:03.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.216 "dma_device_type": 2 00:16:03.216 } 00:16:03.216 ], 00:16:03.216 "driver_specific": {} 00:16:03.216 } 00:16:03.216 ] 00:16:03.216 05:13:22 -- common/autotest_common.sh@895 -- # return 0 00:16:03.216 05:13:22 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:03.216 05:13:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:03.216 05:13:22 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:03.216 05:13:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:03.216 05:13:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:03.216 05:13:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:03.216 05:13:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:03.216 05:13:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:03.216 05:13:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:03.216 05:13:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:03.216 05:13:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:03.216 05:13:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:03.216 05:13:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.216 05:13:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.216 05:13:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:03.216 "name": "Existed_Raid", 00:16:03.216 "uuid": "2a4f7dc9-67d2-4c1a-b59b-0dddd4d3dc59", 00:16:03.216 "strip_size_kb": 0, 00:16:03.216 "state": "online", 00:16:03.216 "raid_level": "raid1", 00:16:03.216 "superblock": true, 00:16:03.216 "num_base_bdevs": 2, 00:16:03.216 "num_base_bdevs_discovered": 2, 00:16:03.216 "num_base_bdevs_operational": 2, 00:16:03.216 "base_bdevs_list": [ 00:16:03.216 { 00:16:03.216 "name": "BaseBdev1", 00:16:03.216 "uuid": "c0ac36f6-ba9e-43d1-896c-c8412e334c4a", 00:16:03.216 "is_configured": true, 00:16:03.216 "data_offset": 2048, 00:16:03.216 "data_size": 63488 00:16:03.216 }, 00:16:03.216 { 00:16:03.216 "name": "BaseBdev2", 00:16:03.216 "uuid": "b5b7dcd8-a6b1-47b3-b043-b67089f71a4c", 00:16:03.216 "is_configured": true, 00:16:03.216 "data_offset": 2048, 00:16:03.216 "data_size": 63488 00:16:03.216 } 00:16:03.216 ] 00:16:03.216 }' 00:16:03.216 05:13:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:03.216 05:13:22 -- common/autotest_common.sh@10 -- # set +x 00:16:03.783 05:13:22 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:03.783 [2024-07-26 05:13:22.868992] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:04.041 05:13:22 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:04.041 05:13:22 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:16:04.041 05:13:22 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:04.041 05:13:22 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:04.041 05:13:22 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:16:04.041 05:13:22 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:04.041 05:13:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:04.041 
05:13:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:04.041 05:13:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:04.041 05:13:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:04.041 05:13:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:04.041 05:13:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:04.041 05:13:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:04.041 05:13:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:04.041 05:13:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:04.041 05:13:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.041 05:13:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.300 05:13:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:04.300 "name": "Existed_Raid", 00:16:04.300 "uuid": "2a4f7dc9-67d2-4c1a-b59b-0dddd4d3dc59", 00:16:04.300 "strip_size_kb": 0, 00:16:04.300 "state": "online", 00:16:04.300 "raid_level": "raid1", 00:16:04.300 "superblock": true, 00:16:04.300 "num_base_bdevs": 2, 00:16:04.300 "num_base_bdevs_discovered": 1, 00:16:04.300 "num_base_bdevs_operational": 1, 00:16:04.300 "base_bdevs_list": [ 00:16:04.300 { 00:16:04.300 "name": null, 00:16:04.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.300 "is_configured": false, 00:16:04.300 "data_offset": 2048, 00:16:04.300 "data_size": 63488 00:16:04.300 }, 00:16:04.300 { 00:16:04.300 "name": "BaseBdev2", 00:16:04.300 "uuid": "b5b7dcd8-a6b1-47b3-b043-b67089f71a4c", 00:16:04.300 "is_configured": true, 00:16:04.300 "data_offset": 2048, 00:16:04.300 "data_size": 63488 00:16:04.300 } 00:16:04.300 ] 00:16:04.300 }' 00:16:04.300 05:13:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:04.300 05:13:23 -- common/autotest_common.sh@10 -- # set +x 00:16:04.558 05:13:23 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:04.558 05:13:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:04.558 05:13:23 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.558 05:13:23 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:04.816 05:13:23 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:04.816 05:13:23 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:04.816 05:13:23 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:05.074 [2024-07-26 05:13:23.958404] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:05.074 [2024-07-26 05:13:23.958625] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:05.074 [2024-07-26 05:13:23.958711] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.074 [2024-07-26 05:13:24.035741] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:05.074 [2024-07-26 05:13:24.035783] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:16:05.074 05:13:24 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:05.074 05:13:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:05.074 05:13:24 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
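With BaseBdev1 deleted, verify_raid_bdev_state expects the superblock raid1 bdev to remain online with a single discovered member, since has_redundancy returns 0 for raid1. A condensed sketch of the check being driven here, reusing the jq filter from the trace; the individual field comparisons are illustrative, as the helper's internals are not captured in this log:

  info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
         | jq -r '.[] | select(.name == "Existed_Raid")')

  # raid1 tolerates losing one member: still online, one base bdev discovered
  [[ $(jq -r '.state' <<<"$info") == online ]]
  [[ $(jq -r '.raid_level' <<<"$info") == raid1 ]]
  [[ $(jq -r '.num_base_bdevs_discovered' <<<"$info") == 1 ]]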
00:16:05.074 05:13:24 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:05.333 05:13:24 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:05.333 05:13:24 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:05.333 05:13:24 -- bdev/bdev_raid.sh@287 -- # killprocess 70574 00:16:05.333 05:13:24 -- common/autotest_common.sh@926 -- # '[' -z 70574 ']' 00:16:05.333 05:13:24 -- common/autotest_common.sh@930 -- # kill -0 70574 00:16:05.333 05:13:24 -- common/autotest_common.sh@931 -- # uname 00:16:05.333 05:13:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:05.333 05:13:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70574 00:16:05.333 killing process with pid 70574 00:16:05.333 05:13:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:05.333 05:13:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:05.333 05:13:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70574' 00:16:05.333 05:13:24 -- common/autotest_common.sh@945 -- # kill 70574 00:16:05.333 [2024-07-26 05:13:24.299861] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:05.333 05:13:24 -- common/autotest_common.sh@950 -- # wait 70574 00:16:05.333 [2024-07-26 05:13:24.299966] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:06.268 05:13:25 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:06.268 00:16:06.268 real 0m9.383s 00:16:06.268 user 0m15.445s 00:16:06.268 sys 0m1.334s 00:16:06.268 05:13:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:06.268 ************************************ 00:16:06.268 END TEST raid_state_function_test_sb 00:16:06.268 ************************************ 00:16:06.268 05:13:25 -- common/autotest_common.sh@10 -- # set +x 00:16:06.268 05:13:25 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:16:06.268 05:13:25 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:06.268 05:13:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:06.268 05:13:25 -- common/autotest_common.sh@10 -- # set +x 00:16:06.526 ************************************ 00:16:06.526 START TEST raid_superblock_test 00:16:06.526 ************************************ 00:16:06.526 05:13:25 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 2 00:16:06.526 05:13:25 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:16:06.526 05:13:25 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:16:06.526 05:13:25 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:06.526 05:13:25 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:06.526 05:13:25 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:06.526 05:13:25 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:06.526 05:13:25 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:06.526 05:13:25 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:06.526 05:13:25 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:06.526 05:13:25 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:06.526 05:13:25 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:06.526 05:13:25 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:06.526 05:13:25 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:06.526 05:13:25 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:16:06.526 05:13:25 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:16:06.526 05:13:25 -- bdev/bdev_raid.sh@357 -- # raid_pid=70876 00:16:06.527 05:13:25 -- 
bdev/bdev_raid.sh@358 -- # waitforlisten 70876 /var/tmp/spdk-raid.sock 00:16:06.527 05:13:25 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:06.527 05:13:25 -- common/autotest_common.sh@819 -- # '[' -z 70876 ']' 00:16:06.527 05:13:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:06.527 05:13:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:06.527 05:13:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:06.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:06.527 05:13:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:06.527 05:13:25 -- common/autotest_common.sh@10 -- # set +x 00:16:06.527 [2024-07-26 05:13:25.439877] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:06.527 [2024-07-26 05:13:25.440302] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70876 ] 00:16:06.527 [2024-07-26 05:13:25.608719] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.785 [2024-07-26 05:13:25.776991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.043 [2024-07-26 05:13:25.937212] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:07.302 05:13:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:07.302 05:13:26 -- common/autotest_common.sh@852 -- # return 0 00:16:07.302 05:13:26 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:07.302 05:13:26 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:07.302 05:13:26 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:07.302 05:13:26 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:07.302 05:13:26 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:07.302 05:13:26 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:07.302 05:13:26 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:07.302 05:13:26 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:07.302 05:13:26 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:07.560 malloc1 00:16:07.818 05:13:26 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:07.818 [2024-07-26 05:13:26.859353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:07.818 [2024-07-26 05:13:26.859464] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.818 [2024-07-26 05:13:26.859502] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:16:07.818 [2024-07-26 05:13:26.859517] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.818 [2024-07-26 05:13:26.862083] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.818 [2024-07-26 05:13:26.862265] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:07.818 pt1 00:16:07.818 05:13:26 
-- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:07.818 05:13:26 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:07.818 05:13:26 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:07.818 05:13:26 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:07.818 05:13:26 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:07.818 05:13:26 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:07.818 05:13:26 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:07.818 05:13:26 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:07.818 05:13:26 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:08.076 malloc2 00:16:08.076 05:13:27 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:08.354 [2024-07-26 05:13:27.298703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:08.354 [2024-07-26 05:13:27.298946] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.354 [2024-07-26 05:13:27.298994] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:16:08.354 [2024-07-26 05:13:27.299052] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.354 [2024-07-26 05:13:27.301470] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.354 [2024-07-26 05:13:27.301513] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:08.354 pt2 00:16:08.354 05:13:27 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:08.354 05:13:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:08.354 05:13:27 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:16:08.620 [2024-07-26 05:13:27.514802] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:08.620 [2024-07-26 05:13:27.517157] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:08.620 [2024-07-26 05:13:27.517525] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007b80 00:16:08.620 [2024-07-26 05:13:27.517664] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:08.620 [2024-07-26 05:13:27.517840] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:16:08.620 [2024-07-26 05:13:27.518458] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007b80 00:16:08.620 [2024-07-26 05:13:27.518639] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007b80 00:16:08.620 [2024-07-26 05:13:27.519051] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.620 05:13:27 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:08.620 05:13:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:08.620 05:13:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:08.620 05:13:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:08.620 05:13:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:08.620 05:13:27 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=2 00:16:08.620 05:13:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:08.620 05:13:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:08.620 05:13:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:08.620 05:13:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:08.620 05:13:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.620 05:13:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.878 05:13:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:08.878 "name": "raid_bdev1", 00:16:08.878 "uuid": "63ec332e-f5f5-428a-927c-67722890a2ba", 00:16:08.878 "strip_size_kb": 0, 00:16:08.878 "state": "online", 00:16:08.878 "raid_level": "raid1", 00:16:08.878 "superblock": true, 00:16:08.878 "num_base_bdevs": 2, 00:16:08.878 "num_base_bdevs_discovered": 2, 00:16:08.878 "num_base_bdevs_operational": 2, 00:16:08.878 "base_bdevs_list": [ 00:16:08.878 { 00:16:08.878 "name": "pt1", 00:16:08.878 "uuid": "91ac5c02-d637-5d89-aefd-a179e356740d", 00:16:08.878 "is_configured": true, 00:16:08.878 "data_offset": 2048, 00:16:08.878 "data_size": 63488 00:16:08.878 }, 00:16:08.878 { 00:16:08.878 "name": "pt2", 00:16:08.878 "uuid": "38282a8b-0caa-5cd2-adee-c041f4a1bf34", 00:16:08.878 "is_configured": true, 00:16:08.878 "data_offset": 2048, 00:16:08.878 "data_size": 63488 00:16:08.878 } 00:16:08.878 ] 00:16:08.878 }' 00:16:08.878 05:13:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:08.878 05:13:27 -- common/autotest_common.sh@10 -- # set +x 00:16:09.137 05:13:28 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:09.137 05:13:28 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:09.394 [2024-07-26 05:13:28.263330] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:09.394 05:13:28 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=63ec332e-f5f5-428a-927c-67722890a2ba 00:16:09.394 05:13:28 -- bdev/bdev_raid.sh@380 -- # '[' -z 63ec332e-f5f5-428a-927c-67722890a2ba ']' 00:16:09.395 05:13:28 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:09.652 [2024-07-26 05:13:28.527185] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:09.652 [2024-07-26 05:13:28.527224] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:09.652 [2024-07-26 05:13:28.527305] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:09.652 [2024-07-26 05:13:28.527375] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:09.652 [2024-07-26 05:13:28.527389] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007b80 name raid_bdev1, state offline 00:16:09.652 05:13:28 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:09.653 05:13:28 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.911 05:13:28 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:09.911 05:13:28 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:09.911 05:13:28 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:09.911 05:13:28 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt1 00:16:10.169 05:13:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:10.169 05:13:29 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:10.426 05:13:29 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:10.426 05:13:29 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:10.427 05:13:29 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:10.427 05:13:29 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:10.427 05:13:29 -- common/autotest_common.sh@640 -- # local es=0 00:16:10.427 05:13:29 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:10.427 05:13:29 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:10.427 05:13:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:10.427 05:13:29 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:10.427 05:13:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:10.427 05:13:29 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:10.427 05:13:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:10.427 05:13:29 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:10.427 05:13:29 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:10.427 05:13:29 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:10.685 [2024-07-26 05:13:29.743550] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:10.685 [2024-07-26 05:13:29.745625] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:10.685 [2024-07-26 05:13:29.745709] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:10.685 [2024-07-26 05:13:29.745778] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:10.685 [2024-07-26 05:13:29.745807] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:10.685 [2024-07-26 05:13:29.745819] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name raid_bdev1, state configuring 00:16:10.685 request: 00:16:10.685 { 00:16:10.685 "name": "raid_bdev1", 00:16:10.685 "raid_level": "raid1", 00:16:10.685 "base_bdevs": [ 00:16:10.685 "malloc1", 00:16:10.685 "malloc2" 00:16:10.685 ], 00:16:10.685 "superblock": false, 00:16:10.685 "method": "bdev_raid_create", 00:16:10.685 "req_id": 1 00:16:10.685 } 00:16:10.685 Got JSON-RPC error response 00:16:10.685 response: 00:16:10.685 { 00:16:10.685 "code": -17, 00:16:10.685 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:10.685 } 00:16:10.686 05:13:29 -- common/autotest_common.sh@643 -- # es=1 00:16:10.686 05:13:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 
00:16:10.686 05:13:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:10.686 05:13:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:10.686 05:13:29 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.686 05:13:29 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:10.944 05:13:29 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:10.944 05:13:29 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:10.944 05:13:29 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:11.201 [2024-07-26 05:13:30.179607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:11.201 [2024-07-26 05:13:30.179923] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.201 [2024-07-26 05:13:30.180013] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:16:11.201 [2024-07-26 05:13:30.180266] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.201 [2024-07-26 05:13:30.182817] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.201 [2024-07-26 05:13:30.183010] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:11.201 [2024-07-26 05:13:30.183135] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:11.201 [2024-07-26 05:13:30.183195] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:11.201 pt1 00:16:11.201 05:13:30 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:11.201 05:13:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:11.201 05:13:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:11.201 05:13:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:11.201 05:13:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:11.201 05:13:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:11.201 05:13:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:11.201 05:13:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:11.201 05:13:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:11.201 05:13:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:11.201 05:13:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.201 05:13:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.459 05:13:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:11.459 "name": "raid_bdev1", 00:16:11.459 "uuid": "63ec332e-f5f5-428a-927c-67722890a2ba", 00:16:11.459 "strip_size_kb": 0, 00:16:11.459 "state": "configuring", 00:16:11.459 "raid_level": "raid1", 00:16:11.459 "superblock": true, 00:16:11.459 "num_base_bdevs": 2, 00:16:11.459 "num_base_bdevs_discovered": 1, 00:16:11.459 "num_base_bdevs_operational": 2, 00:16:11.459 "base_bdevs_list": [ 00:16:11.459 { 00:16:11.459 "name": "pt1", 00:16:11.459 "uuid": "91ac5c02-d637-5d89-aefd-a179e356740d", 00:16:11.459 "is_configured": true, 00:16:11.459 "data_offset": 2048, 00:16:11.459 "data_size": 63488 00:16:11.459 }, 00:16:11.459 { 00:16:11.459 "name": null, 00:16:11.459 "uuid": "38282a8b-0caa-5cd2-adee-c041f4a1bf34", 00:16:11.459 "is_configured": false, 
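raid_bdev1 and the passthru bdevs have been torn down, but the superblock written through pt1/pt2 still sits on malloc1 and malloc2, so the create above claims both malloc bdevs, finds the existing superblocks, and fails with -17 (File exists); the harness wraps the call in its NOT helper so the non-zero exit counts as a pass. A bare-bones version of that negative check, assuming the same socket and bdev names as the trace:

  # this create must fail: both malloc bdevs still hold the superblock written for raid_bdev1
  if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
         bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1; then
      echo "bdev_raid_create unexpectedly succeeded" >&2
      exit 1
  fi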
00:16:11.459 "data_offset": 2048, 00:16:11.459 "data_size": 63488 00:16:11.459 } 00:16:11.459 ] 00:16:11.459 }' 00:16:11.459 05:13:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:11.459 05:13:30 -- common/autotest_common.sh@10 -- # set +x 00:16:11.717 05:13:30 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:16:11.717 05:13:30 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:11.717 05:13:30 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:11.717 05:13:30 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:11.975 [2024-07-26 05:13:30.935759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:11.975 [2024-07-26 05:13:30.935851] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.975 [2024-07-26 05:13:30.935905] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:16:11.975 [2024-07-26 05:13:30.935919] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.975 [2024-07-26 05:13:30.936471] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.975 [2024-07-26 05:13:30.936516] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:11.975 [2024-07-26 05:13:30.936613] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:11.975 [2024-07-26 05:13:30.936640] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:11.975 [2024-07-26 05:13:30.936786] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:16:11.975 [2024-07-26 05:13:30.936800] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:11.975 [2024-07-26 05:13:30.937035] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:16:11.975 [2024-07-26 05:13:30.937371] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:16:11.975 [2024-07-26 05:13:30.937391] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:16:11.975 [2024-07-26 05:13:30.937524] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.975 pt2 00:16:11.975 05:13:30 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:11.975 05:13:30 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:11.975 05:13:30 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:11.975 05:13:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:11.975 05:13:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:11.975 05:13:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:11.975 05:13:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:11.975 05:13:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:11.975 05:13:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:11.975 05:13:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:11.975 05:13:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:11.975 05:13:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:11.975 05:13:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.975 05:13:30 -- bdev/bdev_raid.sh@127 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.234 05:13:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:12.234 "name": "raid_bdev1", 00:16:12.234 "uuid": "63ec332e-f5f5-428a-927c-67722890a2ba", 00:16:12.234 "strip_size_kb": 0, 00:16:12.234 "state": "online", 00:16:12.234 "raid_level": "raid1", 00:16:12.234 "superblock": true, 00:16:12.234 "num_base_bdevs": 2, 00:16:12.234 "num_base_bdevs_discovered": 2, 00:16:12.234 "num_base_bdevs_operational": 2, 00:16:12.234 "base_bdevs_list": [ 00:16:12.234 { 00:16:12.234 "name": "pt1", 00:16:12.234 "uuid": "91ac5c02-d637-5d89-aefd-a179e356740d", 00:16:12.234 "is_configured": true, 00:16:12.234 "data_offset": 2048, 00:16:12.234 "data_size": 63488 00:16:12.234 }, 00:16:12.234 { 00:16:12.234 "name": "pt2", 00:16:12.234 "uuid": "38282a8b-0caa-5cd2-adee-c041f4a1bf34", 00:16:12.234 "is_configured": true, 00:16:12.234 "data_offset": 2048, 00:16:12.234 "data_size": 63488 00:16:12.234 } 00:16:12.234 ] 00:16:12.234 }' 00:16:12.234 05:13:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:12.234 05:13:31 -- common/autotest_common.sh@10 -- # set +x 00:16:12.492 05:13:31 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:12.492 05:13:31 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:12.750 [2024-07-26 05:13:31.732214] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:12.750 05:13:31 -- bdev/bdev_raid.sh@430 -- # '[' 63ec332e-f5f5-428a-927c-67722890a2ba '!=' 63ec332e-f5f5-428a-927c-67722890a2ba ']' 00:16:12.750 05:13:31 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:16:12.750 05:13:31 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:12.750 05:13:31 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:12.750 05:13:31 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:13.009 [2024-07-26 05:13:31.984088] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:13.009 05:13:32 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:13.009 05:13:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:13.009 05:13:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:13.009 05:13:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:13.009 05:13:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:13.009 05:13:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:13.009 05:13:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:13.009 05:13:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:13.009 05:13:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:13.009 05:13:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:13.009 05:13:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.009 05:13:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.266 05:13:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:13.266 "name": "raid_bdev1", 00:16:13.266 "uuid": "63ec332e-f5f5-428a-927c-67722890a2ba", 00:16:13.266 "strip_size_kb": 0, 00:16:13.266 "state": "online", 00:16:13.266 "raid_level": "raid1", 00:16:13.266 "superblock": true, 00:16:13.266 "num_base_bdevs": 2, 00:16:13.266 "num_base_bdevs_discovered": 1, 00:16:13.266 "num_base_bdevs_operational": 1, 00:16:13.266 "base_bdevs_list": [ 00:16:13.266 { 
00:16:13.266 "name": null, 00:16:13.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.266 "is_configured": false, 00:16:13.266 "data_offset": 2048, 00:16:13.266 "data_size": 63488 00:16:13.266 }, 00:16:13.266 { 00:16:13.266 "name": "pt2", 00:16:13.266 "uuid": "38282a8b-0caa-5cd2-adee-c041f4a1bf34", 00:16:13.266 "is_configured": true, 00:16:13.266 "data_offset": 2048, 00:16:13.266 "data_size": 63488 00:16:13.266 } 00:16:13.266 ] 00:16:13.266 }' 00:16:13.266 05:13:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:13.266 05:13:32 -- common/autotest_common.sh@10 -- # set +x 00:16:13.524 05:13:32 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:13.783 [2024-07-26 05:13:32.724398] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:13.783 [2024-07-26 05:13:32.724450] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:13.783 [2024-07-26 05:13:32.724582] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:13.783 [2024-07-26 05:13:32.724675] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:13.783 [2024-07-26 05:13:32.724706] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:16:13.783 05:13:32 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:16:13.783 05:13:32 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.041 05:13:33 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:16:14.042 05:13:33 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:16:14.042 05:13:33 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:16:14.042 05:13:33 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:14.042 05:13:33 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:14.299 05:13:33 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:16:14.299 05:13:33 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:14.299 05:13:33 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:16:14.299 05:13:33 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:16:14.299 05:13:33 -- bdev/bdev_raid.sh@462 -- # i=1 00:16:14.299 05:13:33 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:14.558 [2024-07-26 05:13:33.435387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:14.558 [2024-07-26 05:13:33.435473] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.558 [2024-07-26 05:13:33.435503] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:16:14.558 [2024-07-26 05:13:33.435518] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.558 [2024-07-26 05:13:33.437871] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.558 [2024-07-26 05:13:33.437914] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:14.558 [2024-07-26 05:13:33.438075] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:14.558 [2024-07-26 05:13:33.438144] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is 
claimed 00:16:14.558 [2024-07-26 05:13:33.438264] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009980 00:16:14.558 [2024-07-26 05:13:33.438288] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:14.558 [2024-07-26 05:13:33.438417] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:16:14.558 pt2 00:16:14.558 [2024-07-26 05:13:33.438920] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009980 00:16:14.558 [2024-07-26 05:13:33.438942] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009980 00:16:14.558 [2024-07-26 05:13:33.439121] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.558 05:13:33 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:14.558 05:13:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:14.558 05:13:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:14.558 05:13:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:14.558 05:13:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:14.558 05:13:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:14.558 05:13:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:14.558 05:13:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:14.558 05:13:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:14.558 05:13:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:14.558 05:13:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.558 05:13:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.816 05:13:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:14.816 "name": "raid_bdev1", 00:16:14.816 "uuid": "63ec332e-f5f5-428a-927c-67722890a2ba", 00:16:14.816 "strip_size_kb": 0, 00:16:14.816 "state": "online", 00:16:14.816 "raid_level": "raid1", 00:16:14.816 "superblock": true, 00:16:14.816 "num_base_bdevs": 2, 00:16:14.816 "num_base_bdevs_discovered": 1, 00:16:14.816 "num_base_bdevs_operational": 1, 00:16:14.816 "base_bdevs_list": [ 00:16:14.816 { 00:16:14.816 "name": null, 00:16:14.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.816 "is_configured": false, 00:16:14.816 "data_offset": 2048, 00:16:14.816 "data_size": 63488 00:16:14.816 }, 00:16:14.816 { 00:16:14.816 "name": "pt2", 00:16:14.816 "uuid": "38282a8b-0caa-5cd2-adee-c041f4a1bf34", 00:16:14.816 "is_configured": true, 00:16:14.816 "data_offset": 2048, 00:16:14.816 "data_size": 63488 00:16:14.816 } 00:16:14.816 ] 00:16:14.816 }' 00:16:14.816 05:13:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:14.816 05:13:33 -- common/autotest_common.sh@10 -- # set +x 00:16:15.074 05:13:34 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:16:15.074 05:13:34 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:16:15.074 05:13:34 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:15.332 [2024-07-26 05:13:34.323861] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:15.332 05:13:34 -- bdev/bdev_raid.sh@506 -- # '[' 63ec332e-f5f5-428a-927c-67722890a2ba '!=' 63ec332e-f5f5-428a-927c-67722890a2ba ']' 00:16:15.332 05:13:34 -- bdev/bdev_raid.sh@511 -- # killprocess 70876 00:16:15.332 05:13:34 -- 
common/autotest_common.sh@926 -- # '[' -z 70876 ']' 00:16:15.332 05:13:34 -- common/autotest_common.sh@930 -- # kill -0 70876 00:16:15.332 05:13:34 -- common/autotest_common.sh@931 -- # uname 00:16:15.332 05:13:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:15.332 05:13:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70876 00:16:15.332 killing process with pid 70876 00:16:15.332 05:13:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:15.332 05:13:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:15.332 05:13:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70876' 00:16:15.332 05:13:34 -- common/autotest_common.sh@945 -- # kill 70876 00:16:15.332 05:13:34 -- common/autotest_common.sh@950 -- # wait 70876 00:16:15.332 [2024-07-26 05:13:34.375445] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:15.332 [2024-07-26 05:13:34.375540] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:15.332 [2024-07-26 05:13:34.375658] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:15.332 [2024-07-26 05:13:34.375686] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state offline 00:16:15.590 [2024-07-26 05:13:34.577572] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:16.525 05:13:35 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:16.525 00:16:16.525 real 0m10.235s 00:16:16.525 user 0m17.015s 00:16:16.525 sys 0m1.493s 00:16:16.525 05:13:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:16.525 ************************************ 00:16:16.525 END TEST raid_superblock_test 00:16:16.525 ************************************ 00:16:16.525 05:13:35 -- common/autotest_common.sh@10 -- # set +x 00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:16:16.784 05:13:35 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:16.784 05:13:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:16.784 05:13:35 -- common/autotest_common.sh@10 -- # set +x 00:16:16.784 ************************************ 00:16:16.784 START TEST raid_state_function_test 00:16:16.784 ************************************ 00:16:16.784 05:13:35 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 false 00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 
00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:16.784 Process raid pid: 71198 00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@226 -- # raid_pid=71198 00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 71198' 00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@228 -- # waitforlisten 71198 /var/tmp/spdk-raid.sock 00:16:16.784 05:13:35 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:16.784 05:13:35 -- common/autotest_common.sh@819 -- # '[' -z 71198 ']' 00:16:16.784 05:13:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:16.784 05:13:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:16.784 05:13:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:16.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:16.784 05:13:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:16.784 05:13:35 -- common/autotest_common.sh@10 -- # set +x 00:16:16.784 [2024-07-26 05:13:35.736201] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
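The trace above repeats the harness pattern used for every raid test in this log: run_test starts the bdev_svc application on a private RPC socket (-r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid), waitforlisten blocks until that socket answers, and every later step drives the target through scripts/rpc.py -s /var/tmp/spdk-raid.sock. A minimal bash sketch of that launch-and-wait step follows; the polling loop, its retry count and the rpc_get_methods probe are illustrative assumptions, not the real waitforlisten from autotest_common.sh.

    # Sketch only: start an SPDK bdev_svc target on a private RPC socket and
    # poll until it accepts RPCs, then drive it with rpc.py.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    RPC_SOCK=/var/tmp/spdk-raid.sock

    "$SPDK_DIR/test/app/bdev_svc/bdev_svc" -r "$RPC_SOCK" -i 0 -L bdev_raid &
    svc_pid=$!

    # Hypothetical stand-in for waitforlisten: retry a cheap RPC until the
    # socket is up, bailing out if the target died first.
    for _ in $(seq 1 100); do
        "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods &>/dev/null && break
        kill -0 "$svc_pid" 2>/dev/null || exit 1
        sleep 0.1
    done

    # From here the test issues bdev/raid RPCs against the same socket, e.g.:
    "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" bdev_malloc_create 32 512 -b BaseBdev1
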
00:16:16.784 [2024-07-26 05:13:35.736578] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.042 [2024-07-26 05:13:35.911835] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.042 [2024-07-26 05:13:36.135861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.300 [2024-07-26 05:13:36.306955] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:17.558 05:13:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:17.558 05:13:36 -- common/autotest_common.sh@852 -- # return 0 00:16:17.558 05:13:36 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:17.816 [2024-07-26 05:13:36.803742] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:17.816 [2024-07-26 05:13:36.803826] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:17.816 [2024-07-26 05:13:36.803842] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:17.817 [2024-07-26 05:13:36.803856] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:17.817 [2024-07-26 05:13:36.803865] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:17.817 [2024-07-26 05:13:36.803876] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:17.817 05:13:36 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:17.817 05:13:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:17.817 05:13:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:17.817 05:13:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:17.817 05:13:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:17.817 05:13:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:17.817 05:13:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:17.817 05:13:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:17.817 05:13:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:17.817 05:13:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:17.817 05:13:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.817 05:13:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.075 05:13:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:18.075 "name": "Existed_Raid", 00:16:18.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.075 "strip_size_kb": 64, 00:16:18.075 "state": "configuring", 00:16:18.075 "raid_level": "raid0", 00:16:18.075 "superblock": false, 00:16:18.075 "num_base_bdevs": 3, 00:16:18.075 "num_base_bdevs_discovered": 0, 00:16:18.075 "num_base_bdevs_operational": 3, 00:16:18.075 "base_bdevs_list": [ 00:16:18.075 { 00:16:18.075 "name": "BaseBdev1", 00:16:18.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.075 "is_configured": false, 00:16:18.075 "data_offset": 0, 00:16:18.075 "data_size": 0 00:16:18.075 }, 00:16:18.075 { 00:16:18.075 "name": "BaseBdev2", 00:16:18.075 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:18.075 "is_configured": false, 00:16:18.075 "data_offset": 0, 00:16:18.075 "data_size": 0 00:16:18.075 }, 00:16:18.075 { 00:16:18.075 "name": "BaseBdev3", 00:16:18.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.075 "is_configured": false, 00:16:18.075 "data_offset": 0, 00:16:18.075 "data_size": 0 00:16:18.075 } 00:16:18.075 ] 00:16:18.075 }' 00:16:18.075 05:13:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:18.075 05:13:37 -- common/autotest_common.sh@10 -- # set +x 00:16:18.334 05:13:37 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:18.593 [2024-07-26 05:13:37.503802] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:18.593 [2024-07-26 05:13:37.503845] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:16:18.593 05:13:37 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:18.851 [2024-07-26 05:13:37.759942] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:18.851 [2024-07-26 05:13:37.760030] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:18.851 [2024-07-26 05:13:37.760061] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:18.851 [2024-07-26 05:13:37.760079] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:18.851 [2024-07-26 05:13:37.760088] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:18.851 [2024-07-26 05:13:37.760100] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:18.851 05:13:37 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:19.109 [2024-07-26 05:13:38.047519] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:19.109 BaseBdev1 00:16:19.109 05:13:38 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:19.109 05:13:38 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:19.109 05:13:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:19.109 05:13:38 -- common/autotest_common.sh@889 -- # local i 00:16:19.109 05:13:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:19.109 05:13:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:19.109 05:13:38 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:19.367 05:13:38 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:19.626 [ 00:16:19.626 { 00:16:19.626 "name": "BaseBdev1", 00:16:19.626 "aliases": [ 00:16:19.626 "f996d6a0-967f-43cf-b9ea-741faea426c0" 00:16:19.626 ], 00:16:19.626 "product_name": "Malloc disk", 00:16:19.626 "block_size": 512, 00:16:19.626 "num_blocks": 65536, 00:16:19.626 "uuid": "f996d6a0-967f-43cf-b9ea-741faea426c0", 00:16:19.626 "assigned_rate_limits": { 00:16:19.626 "rw_ios_per_sec": 0, 00:16:19.626 "rw_mbytes_per_sec": 0, 00:16:19.626 "r_mbytes_per_sec": 0, 00:16:19.626 "w_mbytes_per_sec": 0 
00:16:19.626 }, 00:16:19.626 "claimed": true, 00:16:19.626 "claim_type": "exclusive_write", 00:16:19.626 "zoned": false, 00:16:19.626 "supported_io_types": { 00:16:19.626 "read": true, 00:16:19.626 "write": true, 00:16:19.626 "unmap": true, 00:16:19.626 "write_zeroes": true, 00:16:19.626 "flush": true, 00:16:19.626 "reset": true, 00:16:19.626 "compare": false, 00:16:19.626 "compare_and_write": false, 00:16:19.626 "abort": true, 00:16:19.626 "nvme_admin": false, 00:16:19.626 "nvme_io": false 00:16:19.626 }, 00:16:19.626 "memory_domains": [ 00:16:19.626 { 00:16:19.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.626 "dma_device_type": 2 00:16:19.626 } 00:16:19.626 ], 00:16:19.626 "driver_specific": {} 00:16:19.626 } 00:16:19.626 ] 00:16:19.626 05:13:38 -- common/autotest_common.sh@895 -- # return 0 00:16:19.626 05:13:38 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:19.626 05:13:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:19.626 05:13:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:19.626 05:13:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:19.626 05:13:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:19.626 05:13:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:19.626 05:13:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:19.626 05:13:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:19.626 05:13:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:19.626 05:13:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:19.626 05:13:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.626 05:13:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.885 05:13:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:19.885 "name": "Existed_Raid", 00:16:19.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.885 "strip_size_kb": 64, 00:16:19.885 "state": "configuring", 00:16:19.885 "raid_level": "raid0", 00:16:19.885 "superblock": false, 00:16:19.885 "num_base_bdevs": 3, 00:16:19.885 "num_base_bdevs_discovered": 1, 00:16:19.885 "num_base_bdevs_operational": 3, 00:16:19.885 "base_bdevs_list": [ 00:16:19.885 { 00:16:19.885 "name": "BaseBdev1", 00:16:19.885 "uuid": "f996d6a0-967f-43cf-b9ea-741faea426c0", 00:16:19.885 "is_configured": true, 00:16:19.885 "data_offset": 0, 00:16:19.885 "data_size": 65536 00:16:19.885 }, 00:16:19.885 { 00:16:19.885 "name": "BaseBdev2", 00:16:19.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.885 "is_configured": false, 00:16:19.885 "data_offset": 0, 00:16:19.885 "data_size": 0 00:16:19.885 }, 00:16:19.885 { 00:16:19.885 "name": "BaseBdev3", 00:16:19.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.885 "is_configured": false, 00:16:19.885 "data_offset": 0, 00:16:19.885 "data_size": 0 00:16:19.885 } 00:16:19.885 ] 00:16:19.885 }' 00:16:19.885 05:13:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:19.885 05:13:38 -- common/autotest_common.sh@10 -- # set +x 00:16:20.144 05:13:39 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:20.402 [2024-07-26 05:13:39.315927] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:20.402 [2024-07-26 05:13:39.315990] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x516000006680 name Existed_Raid, state configuring 00:16:20.402 05:13:39 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:20.402 05:13:39 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:20.661 [2024-07-26 05:13:39.532004] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:20.661 [2024-07-26 05:13:39.534146] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:20.661 [2024-07-26 05:13:39.534371] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:20.661 [2024-07-26 05:13:39.534414] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:20.661 [2024-07-26 05:13:39.534430] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:20.661 05:13:39 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:20.661 05:13:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:20.661 05:13:39 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:20.661 05:13:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:20.661 05:13:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:20.661 05:13:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:20.661 05:13:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:20.661 05:13:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:20.661 05:13:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:20.661 05:13:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:20.661 05:13:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:20.661 05:13:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:20.661 05:13:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.661 05:13:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.919 05:13:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:20.919 "name": "Existed_Raid", 00:16:20.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.919 "strip_size_kb": 64, 00:16:20.919 "state": "configuring", 00:16:20.919 "raid_level": "raid0", 00:16:20.919 "superblock": false, 00:16:20.919 "num_base_bdevs": 3, 00:16:20.919 "num_base_bdevs_discovered": 1, 00:16:20.919 "num_base_bdevs_operational": 3, 00:16:20.919 "base_bdevs_list": [ 00:16:20.919 { 00:16:20.919 "name": "BaseBdev1", 00:16:20.919 "uuid": "f996d6a0-967f-43cf-b9ea-741faea426c0", 00:16:20.919 "is_configured": true, 00:16:20.919 "data_offset": 0, 00:16:20.919 "data_size": 65536 00:16:20.919 }, 00:16:20.919 { 00:16:20.919 "name": "BaseBdev2", 00:16:20.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.919 "is_configured": false, 00:16:20.919 "data_offset": 0, 00:16:20.919 "data_size": 0 00:16:20.919 }, 00:16:20.919 { 00:16:20.919 "name": "BaseBdev3", 00:16:20.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.919 "is_configured": false, 00:16:20.919 "data_offset": 0, 00:16:20.919 "data_size": 0 00:16:20.919 } 00:16:20.919 ] 00:16:20.919 }' 00:16:20.919 05:13:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:20.919 05:13:39 -- common/autotest_common.sh@10 -- # set +x 00:16:21.181 05:13:40 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:21.440 [2024-07-26 05:13:40.404138] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:21.440 BaseBdev2 00:16:21.440 05:13:40 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:21.440 05:13:40 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:21.440 05:13:40 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:21.440 05:13:40 -- common/autotest_common.sh@889 -- # local i 00:16:21.440 05:13:40 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:21.440 05:13:40 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:21.440 05:13:40 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:21.699 05:13:40 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:21.957 [ 00:16:21.957 { 00:16:21.957 "name": "BaseBdev2", 00:16:21.957 "aliases": [ 00:16:21.957 "76bbfc34-05ca-4468-bb72-2606e8e68488" 00:16:21.957 ], 00:16:21.957 "product_name": "Malloc disk", 00:16:21.957 "block_size": 512, 00:16:21.957 "num_blocks": 65536, 00:16:21.957 "uuid": "76bbfc34-05ca-4468-bb72-2606e8e68488", 00:16:21.957 "assigned_rate_limits": { 00:16:21.957 "rw_ios_per_sec": 0, 00:16:21.957 "rw_mbytes_per_sec": 0, 00:16:21.957 "r_mbytes_per_sec": 0, 00:16:21.957 "w_mbytes_per_sec": 0 00:16:21.957 }, 00:16:21.957 "claimed": true, 00:16:21.957 "claim_type": "exclusive_write", 00:16:21.957 "zoned": false, 00:16:21.957 "supported_io_types": { 00:16:21.957 "read": true, 00:16:21.957 "write": true, 00:16:21.957 "unmap": true, 00:16:21.957 "write_zeroes": true, 00:16:21.957 "flush": true, 00:16:21.957 "reset": true, 00:16:21.957 "compare": false, 00:16:21.957 "compare_and_write": false, 00:16:21.957 "abort": true, 00:16:21.957 "nvme_admin": false, 00:16:21.957 "nvme_io": false 00:16:21.957 }, 00:16:21.957 "memory_domains": [ 00:16:21.957 { 00:16:21.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.957 "dma_device_type": 2 00:16:21.957 } 00:16:21.957 ], 00:16:21.957 "driver_specific": {} 00:16:21.957 } 00:16:21.957 ] 00:16:21.957 05:13:40 -- common/autotest_common.sh@895 -- # return 0 00:16:21.957 05:13:40 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:21.957 05:13:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:21.957 05:13:40 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:21.957 05:13:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:21.957 05:13:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:21.958 05:13:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:21.958 05:13:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:21.958 05:13:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:21.958 05:13:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:21.958 05:13:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:21.958 05:13:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:21.958 05:13:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:21.958 05:13:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.958 05:13:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:16:22.216 05:13:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:22.216 "name": "Existed_Raid", 00:16:22.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.216 "strip_size_kb": 64, 00:16:22.216 "state": "configuring", 00:16:22.216 "raid_level": "raid0", 00:16:22.216 "superblock": false, 00:16:22.216 "num_base_bdevs": 3, 00:16:22.216 "num_base_bdevs_discovered": 2, 00:16:22.216 "num_base_bdevs_operational": 3, 00:16:22.216 "base_bdevs_list": [ 00:16:22.216 { 00:16:22.216 "name": "BaseBdev1", 00:16:22.216 "uuid": "f996d6a0-967f-43cf-b9ea-741faea426c0", 00:16:22.216 "is_configured": true, 00:16:22.216 "data_offset": 0, 00:16:22.216 "data_size": 65536 00:16:22.216 }, 00:16:22.216 { 00:16:22.216 "name": "BaseBdev2", 00:16:22.216 "uuid": "76bbfc34-05ca-4468-bb72-2606e8e68488", 00:16:22.216 "is_configured": true, 00:16:22.216 "data_offset": 0, 00:16:22.216 "data_size": 65536 00:16:22.216 }, 00:16:22.216 { 00:16:22.216 "name": "BaseBdev3", 00:16:22.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.216 "is_configured": false, 00:16:22.216 "data_offset": 0, 00:16:22.216 "data_size": 0 00:16:22.216 } 00:16:22.216 ] 00:16:22.216 }' 00:16:22.216 05:13:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:22.216 05:13:41 -- common/autotest_common.sh@10 -- # set +x 00:16:22.474 05:13:41 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:22.732 [2024-07-26 05:13:41.640594] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:22.732 [2024-07-26 05:13:41.640857] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:16:22.732 [2024-07-26 05:13:41.640936] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:22.732 [2024-07-26 05:13:41.641214] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:16:22.732 [2024-07-26 05:13:41.641660] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:16:22.732 [2024-07-26 05:13:41.641829] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:16:22.732 [2024-07-26 05:13:41.642351] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.732 BaseBdev3 00:16:22.732 05:13:41 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:22.732 05:13:41 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:22.732 05:13:41 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:22.732 05:13:41 -- common/autotest_common.sh@889 -- # local i 00:16:22.732 05:13:41 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:22.732 05:13:41 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:22.732 05:13:41 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:22.990 05:13:41 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:22.990 [ 00:16:22.990 { 00:16:22.990 "name": "BaseBdev3", 00:16:22.990 "aliases": [ 00:16:22.990 "ed8608aa-b4f7-4f2b-a800-8b294326514e" 00:16:22.990 ], 00:16:22.990 "product_name": "Malloc disk", 00:16:22.990 "block_size": 512, 00:16:22.990 "num_blocks": 65536, 00:16:22.990 "uuid": "ed8608aa-b4f7-4f2b-a800-8b294326514e", 00:16:22.990 "assigned_rate_limits": { 00:16:22.990 
"rw_ios_per_sec": 0, 00:16:22.990 "rw_mbytes_per_sec": 0, 00:16:22.991 "r_mbytes_per_sec": 0, 00:16:22.991 "w_mbytes_per_sec": 0 00:16:22.991 }, 00:16:22.991 "claimed": true, 00:16:22.991 "claim_type": "exclusive_write", 00:16:22.991 "zoned": false, 00:16:22.991 "supported_io_types": { 00:16:22.991 "read": true, 00:16:22.991 "write": true, 00:16:22.991 "unmap": true, 00:16:22.991 "write_zeroes": true, 00:16:22.991 "flush": true, 00:16:22.991 "reset": true, 00:16:22.991 "compare": false, 00:16:22.991 "compare_and_write": false, 00:16:22.991 "abort": true, 00:16:22.991 "nvme_admin": false, 00:16:22.991 "nvme_io": false 00:16:22.991 }, 00:16:22.991 "memory_domains": [ 00:16:22.991 { 00:16:22.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.991 "dma_device_type": 2 00:16:22.991 } 00:16:22.991 ], 00:16:22.991 "driver_specific": {} 00:16:22.991 } 00:16:22.991 ] 00:16:22.991 05:13:42 -- common/autotest_common.sh@895 -- # return 0 00:16:22.991 05:13:42 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:22.991 05:13:42 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:22.991 05:13:42 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:16:22.991 05:13:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:22.991 05:13:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:22.991 05:13:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:22.991 05:13:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:22.991 05:13:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:22.991 05:13:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:22.991 05:13:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:22.991 05:13:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:22.991 05:13:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:22.991 05:13:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.991 05:13:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.248 05:13:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:23.248 "name": "Existed_Raid", 00:16:23.248 "uuid": "0c0bc09f-4de8-4eff-87ec-b76cab2d210a", 00:16:23.248 "strip_size_kb": 64, 00:16:23.248 "state": "online", 00:16:23.248 "raid_level": "raid0", 00:16:23.248 "superblock": false, 00:16:23.248 "num_base_bdevs": 3, 00:16:23.248 "num_base_bdevs_discovered": 3, 00:16:23.248 "num_base_bdevs_operational": 3, 00:16:23.248 "base_bdevs_list": [ 00:16:23.249 { 00:16:23.249 "name": "BaseBdev1", 00:16:23.249 "uuid": "f996d6a0-967f-43cf-b9ea-741faea426c0", 00:16:23.249 "is_configured": true, 00:16:23.249 "data_offset": 0, 00:16:23.249 "data_size": 65536 00:16:23.249 }, 00:16:23.249 { 00:16:23.249 "name": "BaseBdev2", 00:16:23.249 "uuid": "76bbfc34-05ca-4468-bb72-2606e8e68488", 00:16:23.249 "is_configured": true, 00:16:23.249 "data_offset": 0, 00:16:23.249 "data_size": 65536 00:16:23.249 }, 00:16:23.249 { 00:16:23.249 "name": "BaseBdev3", 00:16:23.249 "uuid": "ed8608aa-b4f7-4f2b-a800-8b294326514e", 00:16:23.249 "is_configured": true, 00:16:23.249 "data_offset": 0, 00:16:23.249 "data_size": 65536 00:16:23.249 } 00:16:23.249 ] 00:16:23.249 }' 00:16:23.249 05:13:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:23.249 05:13:42 -- common/autotest_common.sh@10 -- # set +x 00:16:23.816 05:13:42 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:16:23.816 [2024-07-26 05:13:42.825067] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:23.816 [2024-07-26 05:13:42.825335] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:23.816 [2024-07-26 05:13:42.825515] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:24.075 05:13:42 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:24.075 05:13:42 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:16:24.075 05:13:42 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:24.075 05:13:42 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:24.075 05:13:42 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:24.075 05:13:42 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:16:24.075 05:13:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:24.075 05:13:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:24.075 05:13:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:24.075 05:13:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:24.075 05:13:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:24.075 05:13:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:24.075 05:13:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:24.075 05:13:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:24.075 05:13:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:24.075 05:13:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.075 05:13:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.075 05:13:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:24.075 "name": "Existed_Raid", 00:16:24.075 "uuid": "0c0bc09f-4de8-4eff-87ec-b76cab2d210a", 00:16:24.075 "strip_size_kb": 64, 00:16:24.075 "state": "offline", 00:16:24.075 "raid_level": "raid0", 00:16:24.075 "superblock": false, 00:16:24.075 "num_base_bdevs": 3, 00:16:24.075 "num_base_bdevs_discovered": 2, 00:16:24.075 "num_base_bdevs_operational": 2, 00:16:24.075 "base_bdevs_list": [ 00:16:24.075 { 00:16:24.075 "name": null, 00:16:24.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.075 "is_configured": false, 00:16:24.075 "data_offset": 0, 00:16:24.075 "data_size": 65536 00:16:24.075 }, 00:16:24.075 { 00:16:24.075 "name": "BaseBdev2", 00:16:24.075 "uuid": "76bbfc34-05ca-4468-bb72-2606e8e68488", 00:16:24.075 "is_configured": true, 00:16:24.075 "data_offset": 0, 00:16:24.075 "data_size": 65536 00:16:24.075 }, 00:16:24.075 { 00:16:24.075 "name": "BaseBdev3", 00:16:24.075 "uuid": "ed8608aa-b4f7-4f2b-a800-8b294326514e", 00:16:24.075 "is_configured": true, 00:16:24.075 "data_offset": 0, 00:16:24.075 "data_size": 65536 00:16:24.075 } 00:16:24.075 ] 00:16:24.075 }' 00:16:24.075 05:13:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:24.075 05:13:43 -- common/autotest_common.sh@10 -- # set +x 00:16:24.333 05:13:43 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:24.333 05:13:43 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:24.333 05:13:43 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.333 05:13:43 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:24.591 05:13:43 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:24.591 05:13:43 -- bdev/bdev_raid.sh@275 -- 
# '[' Existed_Raid '!=' Existed_Raid ']' 00:16:24.591 05:13:43 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:24.848 [2024-07-26 05:13:43.893775] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:25.106 05:13:43 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:25.106 05:13:43 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:25.106 05:13:43 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.106 05:13:43 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:25.364 05:13:44 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:25.364 05:13:44 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:25.364 05:13:44 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:25.364 [2024-07-26 05:13:44.462963] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:25.364 [2024-07-26 05:13:44.463042] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:16:25.623 05:13:44 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:25.623 05:13:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:25.623 05:13:44 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.623 05:13:44 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:25.882 05:13:44 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:25.882 05:13:44 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:25.882 05:13:44 -- bdev/bdev_raid.sh@287 -- # killprocess 71198 00:16:25.882 05:13:44 -- common/autotest_common.sh@926 -- # '[' -z 71198 ']' 00:16:25.882 05:13:44 -- common/autotest_common.sh@930 -- # kill -0 71198 00:16:25.882 05:13:44 -- common/autotest_common.sh@931 -- # uname 00:16:25.882 05:13:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:25.882 05:13:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71198 00:16:25.882 killing process with pid 71198 00:16:25.882 05:13:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:25.882 05:13:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:25.882 05:13:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71198' 00:16:25.882 05:13:44 -- common/autotest_common.sh@945 -- # kill 71198 00:16:25.882 05:13:44 -- common/autotest_common.sh@950 -- # wait 71198 00:16:25.882 [2024-07-26 05:13:44.848983] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:25.882 [2024-07-26 05:13:44.849352] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:26.816 05:13:45 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:26.816 00:16:26.816 real 0m10.219s 00:16:26.816 user 0m16.955s 00:16:26.816 sys 0m1.500s 00:16:26.816 05:13:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:26.816 05:13:45 -- common/autotest_common.sh@10 -- # set +x 00:16:26.816 ************************************ 00:16:26.816 END TEST raid_state_function_test 00:16:26.816 ************************************ 00:16:27.075 05:13:45 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:16:27.075 05:13:45 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:27.075 05:13:45 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:16:27.075 05:13:45 -- common/autotest_common.sh@10 -- # set +x 00:16:27.075 ************************************ 00:16:27.075 START TEST raid_state_function_test_sb 00:16:27.075 ************************************ 00:16:27.075 05:13:45 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 true 00:16:27.075 05:13:45 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:16:27.075 05:13:45 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:27.075 05:13:45 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:27.075 05:13:45 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:27.075 05:13:45 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:27.075 05:13:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:27.075 05:13:45 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:16:27.075 05:13:45 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:27.075 05:13:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:27.075 05:13:45 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:16:27.075 05:13:45 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:27.075 05:13:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:27.075 05:13:45 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:16:27.075 05:13:45 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:27.075 05:13:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:27.075 05:13:45 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:27.075 05:13:45 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:27.075 05:13:45 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:27.075 05:13:45 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:27.075 Process raid pid: 71538 00:16:27.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:27.075 05:13:45 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:27.075 05:13:45 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:27.075 05:13:45 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:16:27.075 05:13:45 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:27.075 05:13:45 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:27.075 05:13:45 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:27.075 05:13:45 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:27.075 05:13:45 -- bdev/bdev_raid.sh@226 -- # raid_pid=71538 00:16:27.075 05:13:45 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 71538' 00:16:27.075 05:13:45 -- bdev/bdev_raid.sh@228 -- # waitforlisten 71538 /var/tmp/spdk-raid.sock 00:16:27.075 05:13:45 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:27.075 05:13:45 -- common/autotest_common.sh@819 -- # '[' -z 71538 ']' 00:16:27.075 05:13:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:27.075 05:13:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:27.075 05:13:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:27.075 05:13:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:27.075 05:13:45 -- common/autotest_common.sh@10 -- # set +x 00:16:27.075 [2024-07-26 05:13:46.007348] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
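The _sb variant that starts here differs from the preceding raid_state_function_test run only in the superblock argument: superblock=true makes the helper pass -s to bdev_raid_create, so raid metadata is persisted in a superblock on each base bdev, and the JSON dumps later in this trace report data_offset 2048 (with data_size 63488) instead of data_offset 0 for configured members. The two invocations, taken from the commands visible in the trace:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Without a superblock (raid_state_function_test): data_offset stays 0.
    $RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

    # With a superblock (raid_state_function_test_sb): note the extra -s.
    $RPC bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
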
00:16:27.075 [2024-07-26 05:13:46.007722] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.075 [2024-07-26 05:13:46.181648] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.333 [2024-07-26 05:13:46.402902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.591 [2024-07-26 05:13:46.569210] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:27.850 05:13:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:27.850 05:13:46 -- common/autotest_common.sh@852 -- # return 0 00:16:27.850 05:13:46 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:28.108 [2024-07-26 05:13:47.097282] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:28.108 [2024-07-26 05:13:47.097509] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:28.108 [2024-07-26 05:13:47.097628] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:28.108 [2024-07-26 05:13:47.097686] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:28.108 [2024-07-26 05:13:47.097786] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:28.108 [2024-07-26 05:13:47.097956] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:28.108 05:13:47 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:28.108 05:13:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:28.108 05:13:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:28.108 05:13:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:28.108 05:13:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:28.108 05:13:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:28.108 05:13:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:28.108 05:13:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:28.109 05:13:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:28.109 05:13:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:28.109 05:13:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.109 05:13:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.418 05:13:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:28.418 "name": "Existed_Raid", 00:16:28.418 "uuid": "80c75b60-4eee-44a1-8118-76dfe4a125f5", 00:16:28.418 "strip_size_kb": 64, 00:16:28.418 "state": "configuring", 00:16:28.418 "raid_level": "raid0", 00:16:28.418 "superblock": true, 00:16:28.418 "num_base_bdevs": 3, 00:16:28.418 "num_base_bdevs_discovered": 0, 00:16:28.418 "num_base_bdevs_operational": 3, 00:16:28.418 "base_bdevs_list": [ 00:16:28.418 { 00:16:28.418 "name": "BaseBdev1", 00:16:28.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.418 "is_configured": false, 00:16:28.418 "data_offset": 0, 00:16:28.418 "data_size": 0 00:16:28.418 }, 00:16:28.418 { 00:16:28.418 "name": "BaseBdev2", 00:16:28.418 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:28.418 "is_configured": false, 00:16:28.418 "data_offset": 0, 00:16:28.418 "data_size": 0 00:16:28.418 }, 00:16:28.419 { 00:16:28.419 "name": "BaseBdev3", 00:16:28.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.419 "is_configured": false, 00:16:28.419 "data_offset": 0, 00:16:28.419 "data_size": 0 00:16:28.419 } 00:16:28.419 ] 00:16:28.419 }' 00:16:28.419 05:13:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:28.419 05:13:47 -- common/autotest_common.sh@10 -- # set +x 00:16:28.676 05:13:47 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:28.934 [2024-07-26 05:13:47.861371] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:28.934 [2024-07-26 05:13:47.861418] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:16:28.934 05:13:47 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:29.192 [2024-07-26 05:13:48.121508] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:29.192 [2024-07-26 05:13:48.121581] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:29.192 [2024-07-26 05:13:48.121596] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:29.192 [2024-07-26 05:13:48.121612] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:29.192 [2024-07-26 05:13:48.121620] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:29.192 [2024-07-26 05:13:48.121633] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:29.192 05:13:48 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:29.450 [2024-07-26 05:13:48.352883] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:29.450 BaseBdev1 00:16:29.450 05:13:48 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:29.450 05:13:48 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:29.450 05:13:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:29.450 05:13:48 -- common/autotest_common.sh@889 -- # local i 00:16:29.450 05:13:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:29.450 05:13:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:29.450 05:13:48 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:29.708 05:13:48 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:29.708 [ 00:16:29.708 { 00:16:29.708 "name": "BaseBdev1", 00:16:29.708 "aliases": [ 00:16:29.708 "4812aaca-8179-416b-899e-0ade487900b7" 00:16:29.708 ], 00:16:29.708 "product_name": "Malloc disk", 00:16:29.708 "block_size": 512, 00:16:29.708 "num_blocks": 65536, 00:16:29.708 "uuid": "4812aaca-8179-416b-899e-0ade487900b7", 00:16:29.708 "assigned_rate_limits": { 00:16:29.708 "rw_ios_per_sec": 0, 00:16:29.708 "rw_mbytes_per_sec": 0, 00:16:29.708 "r_mbytes_per_sec": 0, 00:16:29.708 
"w_mbytes_per_sec": 0 00:16:29.708 }, 00:16:29.708 "claimed": true, 00:16:29.708 "claim_type": "exclusive_write", 00:16:29.708 "zoned": false, 00:16:29.708 "supported_io_types": { 00:16:29.708 "read": true, 00:16:29.708 "write": true, 00:16:29.708 "unmap": true, 00:16:29.708 "write_zeroes": true, 00:16:29.708 "flush": true, 00:16:29.708 "reset": true, 00:16:29.708 "compare": false, 00:16:29.708 "compare_and_write": false, 00:16:29.708 "abort": true, 00:16:29.708 "nvme_admin": false, 00:16:29.708 "nvme_io": false 00:16:29.708 }, 00:16:29.708 "memory_domains": [ 00:16:29.708 { 00:16:29.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.708 "dma_device_type": 2 00:16:29.708 } 00:16:29.708 ], 00:16:29.708 "driver_specific": {} 00:16:29.708 } 00:16:29.708 ] 00:16:29.967 05:13:48 -- common/autotest_common.sh@895 -- # return 0 00:16:29.967 05:13:48 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:29.967 05:13:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:29.967 05:13:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:29.967 05:13:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:29.967 05:13:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:29.967 05:13:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:29.967 05:13:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:29.967 05:13:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:29.967 05:13:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:29.967 05:13:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:29.967 05:13:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.967 05:13:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.967 05:13:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:29.967 "name": "Existed_Raid", 00:16:29.967 "uuid": "dd3215ce-e344-408f-bcd5-cc967d228973", 00:16:29.967 "strip_size_kb": 64, 00:16:29.967 "state": "configuring", 00:16:29.967 "raid_level": "raid0", 00:16:29.967 "superblock": true, 00:16:29.967 "num_base_bdevs": 3, 00:16:29.967 "num_base_bdevs_discovered": 1, 00:16:29.967 "num_base_bdevs_operational": 3, 00:16:29.967 "base_bdevs_list": [ 00:16:29.967 { 00:16:29.967 "name": "BaseBdev1", 00:16:29.967 "uuid": "4812aaca-8179-416b-899e-0ade487900b7", 00:16:29.967 "is_configured": true, 00:16:29.967 "data_offset": 2048, 00:16:29.967 "data_size": 63488 00:16:29.967 }, 00:16:29.967 { 00:16:29.967 "name": "BaseBdev2", 00:16:29.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.967 "is_configured": false, 00:16:29.967 "data_offset": 0, 00:16:29.967 "data_size": 0 00:16:29.967 }, 00:16:29.967 { 00:16:29.967 "name": "BaseBdev3", 00:16:29.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.967 "is_configured": false, 00:16:29.967 "data_offset": 0, 00:16:29.967 "data_size": 0 00:16:29.967 } 00:16:29.967 ] 00:16:29.967 }' 00:16:29.967 05:13:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:29.967 05:13:49 -- common/autotest_common.sh@10 -- # set +x 00:16:30.533 05:13:49 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:30.533 [2024-07-26 05:13:49.541257] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:30.533 [2024-07-26 05:13:49.541334] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:16:30.533 05:13:49 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:30.533 05:13:49 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:30.791 05:13:49 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:31.049 BaseBdev1 00:16:31.306 05:13:50 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:31.306 05:13:50 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:31.307 05:13:50 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:31.307 05:13:50 -- common/autotest_common.sh@889 -- # local i 00:16:31.307 05:13:50 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:31.307 05:13:50 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:31.307 05:13:50 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:31.307 05:13:50 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:31.568 [ 00:16:31.568 { 00:16:31.568 "name": "BaseBdev1", 00:16:31.568 "aliases": [ 00:16:31.568 "3a5483b8-307b-4ea5-ba01-0fdd4329487b" 00:16:31.568 ], 00:16:31.568 "product_name": "Malloc disk", 00:16:31.568 "block_size": 512, 00:16:31.568 "num_blocks": 65536, 00:16:31.569 "uuid": "3a5483b8-307b-4ea5-ba01-0fdd4329487b", 00:16:31.569 "assigned_rate_limits": { 00:16:31.569 "rw_ios_per_sec": 0, 00:16:31.569 "rw_mbytes_per_sec": 0, 00:16:31.569 "r_mbytes_per_sec": 0, 00:16:31.569 "w_mbytes_per_sec": 0 00:16:31.569 }, 00:16:31.569 "claimed": false, 00:16:31.569 "zoned": false, 00:16:31.569 "supported_io_types": { 00:16:31.569 "read": true, 00:16:31.569 "write": true, 00:16:31.569 "unmap": true, 00:16:31.569 "write_zeroes": true, 00:16:31.569 "flush": true, 00:16:31.569 "reset": true, 00:16:31.569 "compare": false, 00:16:31.569 "compare_and_write": false, 00:16:31.569 "abort": true, 00:16:31.569 "nvme_admin": false, 00:16:31.569 "nvme_io": false 00:16:31.569 }, 00:16:31.569 "memory_domains": [ 00:16:31.569 { 00:16:31.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.569 "dma_device_type": 2 00:16:31.569 } 00:16:31.569 ], 00:16:31.569 "driver_specific": {} 00:16:31.569 } 00:16:31.569 ] 00:16:31.569 05:13:50 -- common/autotest_common.sh@895 -- # return 0 00:16:31.569 05:13:50 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:31.832 [2024-07-26 05:13:50.784661] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:31.832 [2024-07-26 05:13:50.786904] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:31.832 [2024-07-26 05:13:50.786974] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:31.832 [2024-07-26 05:13:50.786989] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:31.832 [2024-07-26 05:13:50.787004] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:31.832 05:13:50 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:31.832 05:13:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:31.832 
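For reference, the RPC sequence being traced here can be reproduced by hand against the same application socket. A minimal sketch, assuming an SPDK target is already listening on /var/tmp/spdk-raid.sock; the bdev names, the 32 MiB / 512 B malloc geometry and the -z 64 strip size are copied from the trace above, and the ordering is simplified for standalone use rather than following the test script exactly:

    # create three malloc base bdevs, then assemble them into a raid0 array
    # with an on-disk superblock (-s) and a 64 KiB strip size (-z 64)
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # inspect the resulting state: "configuring" while base bdevs are missing,
    # "online" once all three have been discovered and claimed
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state'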
05:13:50 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:31.832 05:13:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:31.832 05:13:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:31.832 05:13:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:31.832 05:13:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:31.832 05:13:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:31.832 05:13:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:31.832 05:13:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:31.832 05:13:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:31.832 05:13:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:31.832 05:13:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.832 05:13:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.090 05:13:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:32.090 "name": "Existed_Raid", 00:16:32.090 "uuid": "c1aa9a0c-8343-4058-8d54-1f7d35ff255f", 00:16:32.090 "strip_size_kb": 64, 00:16:32.090 "state": "configuring", 00:16:32.090 "raid_level": "raid0", 00:16:32.090 "superblock": true, 00:16:32.090 "num_base_bdevs": 3, 00:16:32.090 "num_base_bdevs_discovered": 1, 00:16:32.090 "num_base_bdevs_operational": 3, 00:16:32.090 "base_bdevs_list": [ 00:16:32.090 { 00:16:32.090 "name": "BaseBdev1", 00:16:32.090 "uuid": "3a5483b8-307b-4ea5-ba01-0fdd4329487b", 00:16:32.090 "is_configured": true, 00:16:32.090 "data_offset": 2048, 00:16:32.090 "data_size": 63488 00:16:32.090 }, 00:16:32.090 { 00:16:32.090 "name": "BaseBdev2", 00:16:32.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.090 "is_configured": false, 00:16:32.090 "data_offset": 0, 00:16:32.090 "data_size": 0 00:16:32.090 }, 00:16:32.090 { 00:16:32.090 "name": "BaseBdev3", 00:16:32.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.090 "is_configured": false, 00:16:32.090 "data_offset": 0, 00:16:32.090 "data_size": 0 00:16:32.090 } 00:16:32.090 ] 00:16:32.090 }' 00:16:32.090 05:13:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:32.090 05:13:51 -- common/autotest_common.sh@10 -- # set +x 00:16:32.348 05:13:51 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:32.607 [2024-07-26 05:13:51.690746] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:32.607 BaseBdev2 00:16:32.607 05:13:51 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:32.607 05:13:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:32.607 05:13:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:32.607 05:13:51 -- common/autotest_common.sh@889 -- # local i 00:16:32.607 05:13:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:32.607 05:13:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:32.607 05:13:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:32.865 05:13:51 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:33.122 [ 00:16:33.122 { 00:16:33.122 "name": "BaseBdev2", 00:16:33.122 "aliases": [ 00:16:33.122 
"1cf22b79-a83f-47a9-958f-bbe9a47ce635" 00:16:33.122 ], 00:16:33.122 "product_name": "Malloc disk", 00:16:33.122 "block_size": 512, 00:16:33.122 "num_blocks": 65536, 00:16:33.122 "uuid": "1cf22b79-a83f-47a9-958f-bbe9a47ce635", 00:16:33.122 "assigned_rate_limits": { 00:16:33.122 "rw_ios_per_sec": 0, 00:16:33.122 "rw_mbytes_per_sec": 0, 00:16:33.122 "r_mbytes_per_sec": 0, 00:16:33.122 "w_mbytes_per_sec": 0 00:16:33.122 }, 00:16:33.122 "claimed": true, 00:16:33.122 "claim_type": "exclusive_write", 00:16:33.122 "zoned": false, 00:16:33.122 "supported_io_types": { 00:16:33.122 "read": true, 00:16:33.122 "write": true, 00:16:33.122 "unmap": true, 00:16:33.122 "write_zeroes": true, 00:16:33.122 "flush": true, 00:16:33.122 "reset": true, 00:16:33.122 "compare": false, 00:16:33.122 "compare_and_write": false, 00:16:33.122 "abort": true, 00:16:33.122 "nvme_admin": false, 00:16:33.122 "nvme_io": false 00:16:33.122 }, 00:16:33.122 "memory_domains": [ 00:16:33.122 { 00:16:33.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.122 "dma_device_type": 2 00:16:33.122 } 00:16:33.122 ], 00:16:33.122 "driver_specific": {} 00:16:33.122 } 00:16:33.122 ] 00:16:33.122 05:13:52 -- common/autotest_common.sh@895 -- # return 0 00:16:33.122 05:13:52 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:33.122 05:13:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:33.122 05:13:52 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:33.122 05:13:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:33.122 05:13:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:33.122 05:13:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:33.122 05:13:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:33.122 05:13:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:33.122 05:13:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:33.122 05:13:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:33.122 05:13:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:33.122 05:13:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:33.122 05:13:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.122 05:13:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.379 05:13:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:33.379 "name": "Existed_Raid", 00:16:33.379 "uuid": "c1aa9a0c-8343-4058-8d54-1f7d35ff255f", 00:16:33.379 "strip_size_kb": 64, 00:16:33.379 "state": "configuring", 00:16:33.379 "raid_level": "raid0", 00:16:33.379 "superblock": true, 00:16:33.379 "num_base_bdevs": 3, 00:16:33.379 "num_base_bdevs_discovered": 2, 00:16:33.379 "num_base_bdevs_operational": 3, 00:16:33.379 "base_bdevs_list": [ 00:16:33.379 { 00:16:33.379 "name": "BaseBdev1", 00:16:33.379 "uuid": "3a5483b8-307b-4ea5-ba01-0fdd4329487b", 00:16:33.379 "is_configured": true, 00:16:33.379 "data_offset": 2048, 00:16:33.380 "data_size": 63488 00:16:33.380 }, 00:16:33.380 { 00:16:33.380 "name": "BaseBdev2", 00:16:33.380 "uuid": "1cf22b79-a83f-47a9-958f-bbe9a47ce635", 00:16:33.380 "is_configured": true, 00:16:33.380 "data_offset": 2048, 00:16:33.380 "data_size": 63488 00:16:33.380 }, 00:16:33.380 { 00:16:33.380 "name": "BaseBdev3", 00:16:33.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.380 "is_configured": false, 00:16:33.380 "data_offset": 0, 00:16:33.380 "data_size": 0 00:16:33.380 
} 00:16:33.380 ] 00:16:33.380 }' 00:16:33.380 05:13:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:33.380 05:13:52 -- common/autotest_common.sh@10 -- # set +x 00:16:33.637 05:13:52 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:33.896 [2024-07-26 05:13:52.924744] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:33.896 [2024-07-26 05:13:52.925266] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:16:33.896 [2024-07-26 05:13:52.925307] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:33.896 [2024-07-26 05:13:52.925474] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:16:33.896 [2024-07-26 05:13:52.925859] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:16:33.896 [2024-07-26 05:13:52.925875] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:16:33.896 [2024-07-26 05:13:52.926103] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.896 BaseBdev3 00:16:33.896 05:13:52 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:33.896 05:13:52 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:33.896 05:13:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:33.896 05:13:52 -- common/autotest_common.sh@889 -- # local i 00:16:33.896 05:13:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:33.896 05:13:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:33.896 05:13:52 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:34.154 05:13:53 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:34.411 [ 00:16:34.411 { 00:16:34.411 "name": "BaseBdev3", 00:16:34.412 "aliases": [ 00:16:34.412 "837df937-2281-4a14-b6bf-64160f44b5f2" 00:16:34.412 ], 00:16:34.412 "product_name": "Malloc disk", 00:16:34.412 "block_size": 512, 00:16:34.412 "num_blocks": 65536, 00:16:34.412 "uuid": "837df937-2281-4a14-b6bf-64160f44b5f2", 00:16:34.412 "assigned_rate_limits": { 00:16:34.412 "rw_ios_per_sec": 0, 00:16:34.412 "rw_mbytes_per_sec": 0, 00:16:34.412 "r_mbytes_per_sec": 0, 00:16:34.412 "w_mbytes_per_sec": 0 00:16:34.412 }, 00:16:34.412 "claimed": true, 00:16:34.412 "claim_type": "exclusive_write", 00:16:34.412 "zoned": false, 00:16:34.412 "supported_io_types": { 00:16:34.412 "read": true, 00:16:34.412 "write": true, 00:16:34.412 "unmap": true, 00:16:34.412 "write_zeroes": true, 00:16:34.412 "flush": true, 00:16:34.412 "reset": true, 00:16:34.412 "compare": false, 00:16:34.412 "compare_and_write": false, 00:16:34.412 "abort": true, 00:16:34.412 "nvme_admin": false, 00:16:34.412 "nvme_io": false 00:16:34.412 }, 00:16:34.412 "memory_domains": [ 00:16:34.412 { 00:16:34.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.412 "dma_device_type": 2 00:16:34.412 } 00:16:34.412 ], 00:16:34.412 "driver_specific": {} 00:16:34.412 } 00:16:34.412 ] 00:16:34.412 05:13:53 -- common/autotest_common.sh@895 -- # return 0 00:16:34.412 05:13:53 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:34.412 05:13:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:34.412 05:13:53 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid0 64 3 00:16:34.412 05:13:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:34.412 05:13:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:34.412 05:13:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:34.412 05:13:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:34.412 05:13:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:34.412 05:13:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:34.412 05:13:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:34.412 05:13:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:34.412 05:13:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:34.412 05:13:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.412 05:13:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.670 05:13:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:34.670 "name": "Existed_Raid", 00:16:34.670 "uuid": "c1aa9a0c-8343-4058-8d54-1f7d35ff255f", 00:16:34.670 "strip_size_kb": 64, 00:16:34.670 "state": "online", 00:16:34.670 "raid_level": "raid0", 00:16:34.670 "superblock": true, 00:16:34.670 "num_base_bdevs": 3, 00:16:34.670 "num_base_bdevs_discovered": 3, 00:16:34.670 "num_base_bdevs_operational": 3, 00:16:34.670 "base_bdevs_list": [ 00:16:34.670 { 00:16:34.670 "name": "BaseBdev1", 00:16:34.670 "uuid": "3a5483b8-307b-4ea5-ba01-0fdd4329487b", 00:16:34.670 "is_configured": true, 00:16:34.670 "data_offset": 2048, 00:16:34.670 "data_size": 63488 00:16:34.670 }, 00:16:34.670 { 00:16:34.670 "name": "BaseBdev2", 00:16:34.670 "uuid": "1cf22b79-a83f-47a9-958f-bbe9a47ce635", 00:16:34.670 "is_configured": true, 00:16:34.670 "data_offset": 2048, 00:16:34.670 "data_size": 63488 00:16:34.670 }, 00:16:34.670 { 00:16:34.670 "name": "BaseBdev3", 00:16:34.670 "uuid": "837df937-2281-4a14-b6bf-64160f44b5f2", 00:16:34.670 "is_configured": true, 00:16:34.670 "data_offset": 2048, 00:16:34.670 "data_size": 63488 00:16:34.670 } 00:16:34.670 ] 00:16:34.670 }' 00:16:34.670 05:13:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:34.670 05:13:53 -- common/autotest_common.sh@10 -- # set +x 00:16:34.927 05:13:53 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:35.185 [2024-07-26 05:13:54.089166] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:35.185 [2024-07-26 05:13:54.089207] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:35.185 [2024-07-26 05:13:54.089272] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:35.185 05:13:54 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:35.185 05:13:54 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:16:35.185 05:13:54 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:35.185 05:13:54 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:35.185 05:13:54 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:35.185 05:13:54 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:16:35.185 05:13:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:35.185 05:13:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:35.185 05:13:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:35.185 05:13:54 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:16:35.185 05:13:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:35.185 05:13:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:35.185 05:13:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:35.185 05:13:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:35.185 05:13:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:35.185 05:13:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.185 05:13:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.442 05:13:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:35.442 "name": "Existed_Raid", 00:16:35.442 "uuid": "c1aa9a0c-8343-4058-8d54-1f7d35ff255f", 00:16:35.442 "strip_size_kb": 64, 00:16:35.442 "state": "offline", 00:16:35.442 "raid_level": "raid0", 00:16:35.442 "superblock": true, 00:16:35.442 "num_base_bdevs": 3, 00:16:35.442 "num_base_bdevs_discovered": 2, 00:16:35.442 "num_base_bdevs_operational": 2, 00:16:35.442 "base_bdevs_list": [ 00:16:35.442 { 00:16:35.442 "name": null, 00:16:35.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.442 "is_configured": false, 00:16:35.442 "data_offset": 2048, 00:16:35.442 "data_size": 63488 00:16:35.442 }, 00:16:35.442 { 00:16:35.442 "name": "BaseBdev2", 00:16:35.442 "uuid": "1cf22b79-a83f-47a9-958f-bbe9a47ce635", 00:16:35.442 "is_configured": true, 00:16:35.442 "data_offset": 2048, 00:16:35.442 "data_size": 63488 00:16:35.442 }, 00:16:35.442 { 00:16:35.442 "name": "BaseBdev3", 00:16:35.442 "uuid": "837df937-2281-4a14-b6bf-64160f44b5f2", 00:16:35.442 "is_configured": true, 00:16:35.442 "data_offset": 2048, 00:16:35.442 "data_size": 63488 00:16:35.442 } 00:16:35.442 ] 00:16:35.442 }' 00:16:35.442 05:13:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:35.442 05:13:54 -- common/autotest_common.sh@10 -- # set +x 00:16:35.700 05:13:54 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:35.700 05:13:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:35.700 05:13:54 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.700 05:13:54 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:35.958 05:13:54 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:35.958 05:13:54 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:35.958 05:13:54 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:36.215 [2024-07-26 05:13:55.179348] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:36.215 05:13:55 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:36.215 05:13:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:36.215 05:13:55 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:36.215 05:13:55 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.472 05:13:55 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:36.472 05:13:55 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:36.472 05:13:55 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:36.730 [2024-07-26 05:13:55.717885] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:36.730 [2024-07-26 
05:13:55.717951] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:16:36.730 05:13:55 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:36.730 05:13:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:36.730 05:13:55 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.730 05:13:55 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:36.988 05:13:56 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:36.988 05:13:56 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:36.988 05:13:56 -- bdev/bdev_raid.sh@287 -- # killprocess 71538 00:16:36.988 05:13:56 -- common/autotest_common.sh@926 -- # '[' -z 71538 ']' 00:16:36.988 05:13:56 -- common/autotest_common.sh@930 -- # kill -0 71538 00:16:36.988 05:13:56 -- common/autotest_common.sh@931 -- # uname 00:16:36.988 05:13:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:36.988 05:13:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71538 00:16:36.988 killing process with pid 71538 00:16:36.988 05:13:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:36.988 05:13:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:36.988 05:13:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71538' 00:16:36.988 05:13:56 -- common/autotest_common.sh@945 -- # kill 71538 00:16:36.988 [2024-07-26 05:13:56.056840] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:36.988 05:13:56 -- common/autotest_common.sh@950 -- # wait 71538 00:16:36.988 [2024-07-26 05:13:56.056950] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:38.370 ************************************ 00:16:38.370 END TEST raid_state_function_test_sb 00:16:38.370 ************************************ 00:16:38.370 05:13:57 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:38.370 00:16:38.370 real 0m11.182s 00:16:38.370 user 0m18.575s 00:16:38.370 sys 0m1.662s 00:16:38.370 05:13:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:38.370 05:13:57 -- common/autotest_common.sh@10 -- # set +x 00:16:38.370 05:13:57 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:16:38.370 05:13:57 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:38.370 05:13:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:38.370 05:13:57 -- common/autotest_common.sh@10 -- # set +x 00:16:38.370 ************************************ 00:16:38.370 START TEST raid_superblock_test 00:16:38.370 ************************************ 00:16:38.370 05:13:57 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 3 00:16:38.370 05:13:57 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:16:38.370 05:13:57 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:16:38.370 05:13:57 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:38.370 05:13:57 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:38.370 05:13:57 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:38.370 05:13:57 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:38.370 05:13:57 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:38.370 05:13:57 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:38.370 05:13:57 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:38.370 05:13:57 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:38.370 05:13:57 -- bdev/bdev_raid.sh@345 
-- # local strip_size_create_arg 00:16:38.370 05:13:57 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:38.370 05:13:57 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:38.370 05:13:57 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:16:38.370 05:13:57 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:16:38.370 05:13:57 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:16:38.370 05:13:57 -- bdev/bdev_raid.sh@357 -- # raid_pid=71894 00:16:38.370 05:13:57 -- bdev/bdev_raid.sh@358 -- # waitforlisten 71894 /var/tmp/spdk-raid.sock 00:16:38.370 05:13:57 -- common/autotest_common.sh@819 -- # '[' -z 71894 ']' 00:16:38.370 05:13:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:38.370 05:13:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:38.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:38.370 05:13:57 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:38.370 05:13:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:38.370 05:13:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:38.370 05:13:57 -- common/autotest_common.sh@10 -- # set +x 00:16:38.370 [2024-07-26 05:13:57.243763] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:38.370 [2024-07-26 05:13:57.243947] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71894 ] 00:16:38.370 [2024-07-26 05:13:57.417620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.651 [2024-07-26 05:13:57.588169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.651 [2024-07-26 05:13:57.757937] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:39.217 05:13:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:39.217 05:13:58 -- common/autotest_common.sh@852 -- # return 0 00:16:39.217 05:13:58 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:39.217 05:13:58 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:39.217 05:13:58 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:39.217 05:13:58 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:39.217 05:13:58 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:39.217 05:13:58 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:39.217 05:13:58 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:39.217 05:13:58 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:39.217 05:13:58 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:39.475 malloc1 00:16:39.475 05:13:58 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:39.733 [2024-07-26 05:13:58.713092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:39.733 [2024-07-26 05:13:58.713182] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.733 [2024-07-26 
05:13:58.713221] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:16:39.733 [2024-07-26 05:13:58.713235] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.733 [2024-07-26 05:13:58.716301] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.733 [2024-07-26 05:13:58.716346] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:39.733 pt1 00:16:39.733 05:13:58 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:39.733 05:13:58 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:39.733 05:13:58 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:39.733 05:13:58 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:39.733 05:13:58 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:39.733 05:13:58 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:39.733 05:13:58 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:39.733 05:13:58 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:39.733 05:13:58 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:39.991 malloc2 00:16:39.991 05:13:58 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:40.249 [2024-07-26 05:13:59.173868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:40.249 [2024-07-26 05:13:59.173962] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.249 [2024-07-26 05:13:59.174033] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:16:40.249 [2024-07-26 05:13:59.174053] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.249 [2024-07-26 05:13:59.176555] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.249 [2024-07-26 05:13:59.176608] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:40.249 pt2 00:16:40.249 05:13:59 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:40.249 05:13:59 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:40.249 05:13:59 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:16:40.249 05:13:59 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:16:40.249 05:13:59 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:40.249 05:13:59 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:40.249 05:13:59 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:40.249 05:13:59 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:40.249 05:13:59 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:40.507 malloc3 00:16:40.507 05:13:59 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:40.765 [2024-07-26 05:13:59.653567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:40.765 [2024-07-26 05:13:59.653636] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.765 [2024-07-26 
05:13:59.653671] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:16:40.765 [2024-07-26 05:13:59.653686] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.765 [2024-07-26 05:13:59.656252] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.765 [2024-07-26 05:13:59.656294] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:40.765 pt3 00:16:40.765 05:13:59 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:40.765 05:13:59 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:40.765 05:13:59 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:16:40.765 [2024-07-26 05:13:59.861677] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:40.765 [2024-07-26 05:13:59.863785] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:40.765 [2024-07-26 05:13:59.863882] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:40.765 [2024-07-26 05:13:59.864141] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008780 00:16:40.765 [2024-07-26 05:13:59.864166] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:40.765 [2024-07-26 05:13:59.864295] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:16:40.765 [2024-07-26 05:13:59.864676] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008780 00:16:40.765 [2024-07-26 05:13:59.864706] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008780 00:16:40.765 [2024-07-26 05:13:59.864898] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.023 05:13:59 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:16:41.023 05:13:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:41.023 05:13:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:41.023 05:13:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:41.023 05:13:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:41.023 05:13:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:41.023 05:13:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:41.023 05:13:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:41.023 05:13:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:41.023 05:13:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:41.024 05:13:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.024 05:13:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.024 05:14:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:41.024 "name": "raid_bdev1", 00:16:41.024 "uuid": "f12bf92e-f626-47da-8820-7c8827e054bd", 00:16:41.024 "strip_size_kb": 64, 00:16:41.024 "state": "online", 00:16:41.024 "raid_level": "raid0", 00:16:41.024 "superblock": true, 00:16:41.024 "num_base_bdevs": 3, 00:16:41.024 "num_base_bdevs_discovered": 3, 00:16:41.024 "num_base_bdevs_operational": 3, 00:16:41.024 "base_bdevs_list": [ 00:16:41.024 { 00:16:41.024 "name": "pt1", 00:16:41.024 "uuid": "fbf4388f-0344-5ff1-a689-1a2e8a0f9de6", 
00:16:41.024 "is_configured": true, 00:16:41.024 "data_offset": 2048, 00:16:41.024 "data_size": 63488 00:16:41.024 }, 00:16:41.024 { 00:16:41.024 "name": "pt2", 00:16:41.024 "uuid": "c83c1012-09c3-5922-8d5f-b1f66b2a1baf", 00:16:41.024 "is_configured": true, 00:16:41.024 "data_offset": 2048, 00:16:41.024 "data_size": 63488 00:16:41.024 }, 00:16:41.024 { 00:16:41.024 "name": "pt3", 00:16:41.024 "uuid": "9072e824-5c4f-57e4-9a03-03a80ff847ff", 00:16:41.024 "is_configured": true, 00:16:41.024 "data_offset": 2048, 00:16:41.024 "data_size": 63488 00:16:41.024 } 00:16:41.024 ] 00:16:41.024 }' 00:16:41.024 05:14:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:41.024 05:14:00 -- common/autotest_common.sh@10 -- # set +x 00:16:41.282 05:14:00 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:41.282 05:14:00 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:41.539 [2024-07-26 05:14:00.557991] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:41.539 05:14:00 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=f12bf92e-f626-47da-8820-7c8827e054bd 00:16:41.539 05:14:00 -- bdev/bdev_raid.sh@380 -- # '[' -z f12bf92e-f626-47da-8820-7c8827e054bd ']' 00:16:41.539 05:14:00 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:41.797 [2024-07-26 05:14:00.825842] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:41.797 [2024-07-26 05:14:00.825876] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:41.797 [2024-07-26 05:14:00.825980] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:41.797 [2024-07-26 05:14:00.826103] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:41.797 [2024-07-26 05:14:00.826125] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008780 name raid_bdev1, state offline 00:16:41.797 05:14:00 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.797 05:14:00 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:42.055 05:14:01 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:42.055 05:14:01 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:42.055 05:14:01 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:42.055 05:14:01 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:42.312 05:14:01 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:42.312 05:14:01 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:42.570 05:14:01 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:42.570 05:14:01 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:42.828 05:14:01 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:42.828 05:14:01 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:43.086 05:14:02 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:43.086 05:14:02 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:43.086 05:14:02 -- common/autotest_common.sh@640 -- # local es=0 00:16:43.086 05:14:02 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:43.086 05:14:02 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:43.086 05:14:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:43.086 05:14:02 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:43.086 05:14:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:43.086 05:14:02 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:43.086 05:14:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:43.086 05:14:02 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:43.086 05:14:02 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:43.086 05:14:02 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:43.086 [2024-07-26 05:14:02.194179] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:43.086 [2024-07-26 05:14:02.196303] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:43.086 [2024-07-26 05:14:02.196379] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:43.345 [2024-07-26 05:14:02.196441] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:43.345 [2024-07-26 05:14:02.196496] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:43.345 [2024-07-26 05:14:02.196526] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:16:43.345 [2024-07-26 05:14:02.196561] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:43.345 [2024-07-26 05:14:02.196578] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state configuring 00:16:43.345 request: 00:16:43.345 { 00:16:43.345 "name": "raid_bdev1", 00:16:43.345 "raid_level": "raid0", 00:16:43.345 "base_bdevs": [ 00:16:43.345 "malloc1", 00:16:43.345 "malloc2", 00:16:43.345 "malloc3" 00:16:43.345 ], 00:16:43.345 "superblock": false, 00:16:43.345 "strip_size_kb": 64, 00:16:43.345 "method": "bdev_raid_create", 00:16:43.345 "req_id": 1 00:16:43.345 } 00:16:43.345 Got JSON-RPC error response 00:16:43.345 response: 00:16:43.345 { 00:16:43.345 "code": -17, 00:16:43.345 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:43.345 } 00:16:43.345 05:14:02 -- common/autotest_common.sh@643 -- # es=1 00:16:43.345 05:14:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:43.345 05:14:02 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:43.345 05:14:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:43.345 05:14:02 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:16:43.345 05:14:02 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:43.345 05:14:02 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:43.345 05:14:02 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:43.345 05:14:02 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:43.603 [2024-07-26 05:14:02.594221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:43.603 [2024-07-26 05:14:02.594316] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.603 [2024-07-26 05:14:02.594345] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:16:43.603 [2024-07-26 05:14:02.594376] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.603 [2024-07-26 05:14:02.596816] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.603 [2024-07-26 05:14:02.596870] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:43.603 [2024-07-26 05:14:02.596987] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:43.603 [2024-07-26 05:14:02.597081] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:43.603 pt1 00:16:43.603 05:14:02 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:16:43.603 05:14:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:43.603 05:14:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:43.603 05:14:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:43.603 05:14:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:43.603 05:14:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:43.603 05:14:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:43.603 05:14:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:43.603 05:14:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:43.603 05:14:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:43.603 05:14:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.603 05:14:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.861 05:14:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:43.861 "name": "raid_bdev1", 00:16:43.861 "uuid": "f12bf92e-f626-47da-8820-7c8827e054bd", 00:16:43.861 "strip_size_kb": 64, 00:16:43.861 "state": "configuring", 00:16:43.861 "raid_level": "raid0", 00:16:43.861 "superblock": true, 00:16:43.861 "num_base_bdevs": 3, 00:16:43.861 "num_base_bdevs_discovered": 1, 00:16:43.861 "num_base_bdevs_operational": 3, 00:16:43.861 "base_bdevs_list": [ 00:16:43.861 { 00:16:43.861 "name": "pt1", 00:16:43.861 "uuid": "fbf4388f-0344-5ff1-a689-1a2e8a0f9de6", 00:16:43.861 "is_configured": true, 00:16:43.861 "data_offset": 2048, 00:16:43.861 "data_size": 63488 00:16:43.861 }, 00:16:43.861 { 00:16:43.861 "name": null, 00:16:43.861 "uuid": "c83c1012-09c3-5922-8d5f-b1f66b2a1baf", 00:16:43.861 "is_configured": false, 00:16:43.861 "data_offset": 2048, 00:16:43.861 "data_size": 63488 00:16:43.861 }, 00:16:43.861 { 00:16:43.861 "name": null, 00:16:43.861 "uuid": "9072e824-5c4f-57e4-9a03-03a80ff847ff", 00:16:43.861 "is_configured": false, 00:16:43.861 "data_offset": 2048, 00:16:43.861 "data_size": 63488 
00:16:43.861 } 00:16:43.861 ] 00:16:43.861 }' 00:16:43.861 05:14:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:43.861 05:14:02 -- common/autotest_common.sh@10 -- # set +x 00:16:44.118 05:14:03 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:16:44.118 05:14:03 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:44.376 [2024-07-26 05:14:03.350471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:44.376 [2024-07-26 05:14:03.350744] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.376 [2024-07-26 05:14:03.350785] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:16:44.376 [2024-07-26 05:14:03.350802] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.376 [2024-07-26 05:14:03.351330] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.376 [2024-07-26 05:14:03.351358] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:44.376 [2024-07-26 05:14:03.351464] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:44.376 [2024-07-26 05:14:03.351493] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:44.376 pt2 00:16:44.376 05:14:03 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:44.634 [2024-07-26 05:14:03.558505] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:44.634 05:14:03 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:16:44.634 05:14:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:44.634 05:14:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:44.634 05:14:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:44.634 05:14:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:44.634 05:14:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:44.634 05:14:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:44.634 05:14:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:44.634 05:14:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:44.634 05:14:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:44.634 05:14:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.634 05:14:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.891 05:14:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:44.891 "name": "raid_bdev1", 00:16:44.891 "uuid": "f12bf92e-f626-47da-8820-7c8827e054bd", 00:16:44.891 "strip_size_kb": 64, 00:16:44.891 "state": "configuring", 00:16:44.891 "raid_level": "raid0", 00:16:44.891 "superblock": true, 00:16:44.891 "num_base_bdevs": 3, 00:16:44.891 "num_base_bdevs_discovered": 1, 00:16:44.891 "num_base_bdevs_operational": 3, 00:16:44.891 "base_bdevs_list": [ 00:16:44.891 { 00:16:44.891 "name": "pt1", 00:16:44.891 "uuid": "fbf4388f-0344-5ff1-a689-1a2e8a0f9de6", 00:16:44.891 "is_configured": true, 00:16:44.891 "data_offset": 2048, 00:16:44.891 "data_size": 63488 00:16:44.891 }, 00:16:44.891 { 00:16:44.891 "name": null, 00:16:44.891 "uuid": "c83c1012-09c3-5922-8d5f-b1f66b2a1baf", 00:16:44.891 
"is_configured": false, 00:16:44.891 "data_offset": 2048, 00:16:44.891 "data_size": 63488 00:16:44.891 }, 00:16:44.891 { 00:16:44.891 "name": null, 00:16:44.891 "uuid": "9072e824-5c4f-57e4-9a03-03a80ff847ff", 00:16:44.891 "is_configured": false, 00:16:44.891 "data_offset": 2048, 00:16:44.891 "data_size": 63488 00:16:44.891 } 00:16:44.891 ] 00:16:44.891 }' 00:16:44.891 05:14:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:44.891 05:14:03 -- common/autotest_common.sh@10 -- # set +x 00:16:45.149 05:14:04 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:45.149 05:14:04 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:45.149 05:14:04 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:45.408 [2024-07-26 05:14:04.358731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:45.408 [2024-07-26 05:14:04.358987] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.408 [2024-07-26 05:14:04.359043] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:16:45.408 [2024-07-26 05:14:04.359059] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.408 [2024-07-26 05:14:04.359559] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.408 [2024-07-26 05:14:04.359583] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:45.408 [2024-07-26 05:14:04.359679] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:45.408 [2024-07-26 05:14:04.359706] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:45.408 pt2 00:16:45.408 05:14:04 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:45.408 05:14:04 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:45.408 05:14:04 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:45.666 [2024-07-26 05:14:04.622819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:45.666 [2024-07-26 05:14:04.623137] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.666 [2024-07-26 05:14:04.623214] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:16:45.666 [2024-07-26 05:14:04.623325] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.666 [2024-07-26 05:14:04.623889] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.666 [2024-07-26 05:14:04.624125] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:45.666 [2024-07-26 05:14:04.624272] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:45.666 [2024-07-26 05:14:04.624401] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:45.666 [2024-07-26 05:14:04.624598] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009980 00:16:45.666 [2024-07-26 05:14:04.624715] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:45.666 [2024-07-26 05:14:04.624862] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:16:45.666 [2024-07-26 
05:14:04.625252] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009980 00:16:45.666 [2024-07-26 05:14:04.625378] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009980 00:16:45.666 [2024-07-26 05:14:04.625645] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.666 pt3 00:16:45.666 05:14:04 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:45.666 05:14:04 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:45.666 05:14:04 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:16:45.666 05:14:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:45.666 05:14:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:45.666 05:14:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:45.666 05:14:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:45.666 05:14:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:45.666 05:14:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:45.666 05:14:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:45.666 05:14:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:45.666 05:14:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:45.666 05:14:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.666 05:14:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.925 05:14:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:45.925 "name": "raid_bdev1", 00:16:45.925 "uuid": "f12bf92e-f626-47da-8820-7c8827e054bd", 00:16:45.925 "strip_size_kb": 64, 00:16:45.925 "state": "online", 00:16:45.925 "raid_level": "raid0", 00:16:45.925 "superblock": true, 00:16:45.925 "num_base_bdevs": 3, 00:16:45.925 "num_base_bdevs_discovered": 3, 00:16:45.925 "num_base_bdevs_operational": 3, 00:16:45.925 "base_bdevs_list": [ 00:16:45.925 { 00:16:45.925 "name": "pt1", 00:16:45.925 "uuid": "fbf4388f-0344-5ff1-a689-1a2e8a0f9de6", 00:16:45.925 "is_configured": true, 00:16:45.925 "data_offset": 2048, 00:16:45.925 "data_size": 63488 00:16:45.925 }, 00:16:45.925 { 00:16:45.925 "name": "pt2", 00:16:45.925 "uuid": "c83c1012-09c3-5922-8d5f-b1f66b2a1baf", 00:16:45.925 "is_configured": true, 00:16:45.925 "data_offset": 2048, 00:16:45.925 "data_size": 63488 00:16:45.925 }, 00:16:45.925 { 00:16:45.925 "name": "pt3", 00:16:45.925 "uuid": "9072e824-5c4f-57e4-9a03-03a80ff847ff", 00:16:45.925 "is_configured": true, 00:16:45.925 "data_offset": 2048, 00:16:45.925 "data_size": 63488 00:16:45.925 } 00:16:45.925 ] 00:16:45.925 }' 00:16:45.925 05:14:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:45.925 05:14:04 -- common/autotest_common.sh@10 -- # set +x 00:16:46.183 05:14:05 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:46.183 05:14:05 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:46.441 [2024-07-26 05:14:05.347290] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:46.441 05:14:05 -- bdev/bdev_raid.sh@430 -- # '[' f12bf92e-f626-47da-8820-7c8827e054bd '!=' f12bf92e-f626-47da-8820-7c8827e054bd ']' 00:16:46.441 05:14:05 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:16:46.441 05:14:05 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:46.441 05:14:05 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:46.441 
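The has_redundancy call that just returned 1 is what ends the run here: raid0 offers no redundancy, so the redundancy-dependent checks are skipped and the test proceeds straight to tearing down the RPC app under test (pid 71894). A rough, hypothetical reconstruction of the helper as implied by the trace; only the raid0 branch is visible above, and raid1 returning 0 is an assumption, so treat this as a sketch rather than the actual implementation in the test's common helpers:

    # callers treat a zero return as "this raid level survives losing a base bdev"
    has_redundancy() {
        case $1 in
            raid1) return 0 ;;   # assumed: mirrored levels report redundancy
            *)     return 1 ;;   # raid0/concat: no redundancy (matches the trace)
        esac
    }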
05:14:05 -- bdev/bdev_raid.sh@511 -- # killprocess 71894 00:16:46.441 05:14:05 -- common/autotest_common.sh@926 -- # '[' -z 71894 ']' 00:16:46.441 05:14:05 -- common/autotest_common.sh@930 -- # kill -0 71894 00:16:46.441 05:14:05 -- common/autotest_common.sh@931 -- # uname 00:16:46.441 05:14:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:46.441 05:14:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71894 00:16:46.441 killing process with pid 71894 00:16:46.441 05:14:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:46.441 05:14:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:46.441 05:14:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71894' 00:16:46.441 05:14:05 -- common/autotest_common.sh@945 -- # kill 71894 00:16:46.441 [2024-07-26 05:14:05.397356] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:46.441 05:14:05 -- common/autotest_common.sh@950 -- # wait 71894 00:16:46.441 [2024-07-26 05:14:05.397484] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:46.441 [2024-07-26 05:14:05.397544] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:46.441 [2024-07-26 05:14:05.397560] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state offline 00:16:46.700 [2024-07-26 05:14:05.615566] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:47.635 00:16:47.635 real 0m9.471s 00:16:47.635 user 0m15.555s 00:16:47.635 sys 0m1.385s 00:16:47.635 05:14:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:47.635 05:14:06 -- common/autotest_common.sh@10 -- # set +x 00:16:47.635 ************************************ 00:16:47.635 END TEST raid_superblock_test 00:16:47.635 ************************************ 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:16:47.635 05:14:06 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:47.635 05:14:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:47.635 05:14:06 -- common/autotest_common.sh@10 -- # set +x 00:16:47.635 ************************************ 00:16:47.635 START TEST raid_state_function_test 00:16:47.635 ************************************ 00:16:47.635 05:14:06 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 false 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@208 -- # echo 
BaseBdev3 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:47.635 Process raid pid: 72175 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@226 -- # raid_pid=72175 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 72175' 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@228 -- # waitforlisten 72175 /var/tmp/spdk-raid.sock 00:16:47.635 05:14:06 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:47.635 05:14:06 -- common/autotest_common.sh@819 -- # '[' -z 72175 ']' 00:16:47.635 05:14:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:47.635 05:14:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:47.635 05:14:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:47.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:47.635 05:14:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:47.635 05:14:06 -- common/autotest_common.sh@10 -- # set +x 00:16:47.908 [2024-07-26 05:14:06.777824] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
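For readers following the trace, the teardown logged just above (killprocess 71894) and the launch logged here (bdev_svc, waitforlisten 72175) follow one recurring pattern. This is a simplified sketch assembled from the commands visible in the xtrace; the backgrounding and pid capture are assumed rather than shown verbatim in the log, and waitforlisten/killprocess are the autotest_common.sh helpers invoked above, not redefined here.

# Sketch of the start/stop pattern seen in this trace (assumed reconstruction, not verbatim bdev_raid.sh source).
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!                                         # pid 72175 in this particular run
waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock   # block until the RPC socket accepts connections
# ... test body drives the target through rpc.py calls against /var/tmp/spdk-raid.sock ...
killprocess "$raid_pid"                             # kill -0 liveness check, then kill and wait, as logged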
00:16:47.908 [2024-07-26 05:14:06.777979] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:47.908 [2024-07-26 05:14:06.942112] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.183 [2024-07-26 05:14:07.112677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.183 [2024-07-26 05:14:07.280307] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:48.749 05:14:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:48.749 05:14:07 -- common/autotest_common.sh@852 -- # return 0 00:16:48.749 05:14:07 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:49.008 [2024-07-26 05:14:07.866004] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:49.008 [2024-07-26 05:14:07.866132] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:49.008 [2024-07-26 05:14:07.866150] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:49.008 [2024-07-26 05:14:07.866167] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:49.008 [2024-07-26 05:14:07.866177] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:49.008 [2024-07-26 05:14:07.866193] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:49.008 05:14:07 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:49.008 05:14:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:49.008 05:14:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:49.008 05:14:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:49.008 05:14:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:49.008 05:14:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:49.008 05:14:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:49.008 05:14:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:49.008 05:14:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:49.008 05:14:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:49.008 05:14:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.008 05:14:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.267 05:14:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:49.267 "name": "Existed_Raid", 00:16:49.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.267 "strip_size_kb": 64, 00:16:49.267 "state": "configuring", 00:16:49.267 "raid_level": "concat", 00:16:49.267 "superblock": false, 00:16:49.267 "num_base_bdevs": 3, 00:16:49.267 "num_base_bdevs_discovered": 0, 00:16:49.267 "num_base_bdevs_operational": 3, 00:16:49.267 "base_bdevs_list": [ 00:16:49.267 { 00:16:49.267 "name": "BaseBdev1", 00:16:49.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.267 "is_configured": false, 00:16:49.267 "data_offset": 0, 00:16:49.267 "data_size": 0 00:16:49.267 }, 00:16:49.267 { 00:16:49.267 "name": "BaseBdev2", 00:16:49.267 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:49.267 "is_configured": false, 00:16:49.267 "data_offset": 0, 00:16:49.267 "data_size": 0 00:16:49.267 }, 00:16:49.267 { 00:16:49.267 "name": "BaseBdev3", 00:16:49.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.267 "is_configured": false, 00:16:49.267 "data_offset": 0, 00:16:49.267 "data_size": 0 00:16:49.267 } 00:16:49.267 ] 00:16:49.267 }' 00:16:49.267 05:14:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:49.267 05:14:08 -- common/autotest_common.sh@10 -- # set +x 00:16:49.524 05:14:08 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:49.783 [2024-07-26 05:14:08.654073] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:49.783 [2024-07-26 05:14:08.654326] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:16:49.783 05:14:08 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:49.783 [2024-07-26 05:14:08.862162] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:49.783 [2024-07-26 05:14:08.862378] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:49.783 [2024-07-26 05:14:08.862405] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:49.783 [2024-07-26 05:14:08.862426] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:49.783 [2024-07-26 05:14:08.862436] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:49.783 [2024-07-26 05:14:08.862450] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:49.783 05:14:08 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:50.040 [2024-07-26 05:14:09.145511] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:50.040 BaseBdev1 00:16:50.299 05:14:09 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:50.299 05:14:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:50.299 05:14:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:50.299 05:14:09 -- common/autotest_common.sh@889 -- # local i 00:16:50.299 05:14:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:50.299 05:14:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:50.299 05:14:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:50.299 05:14:09 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:50.557 [ 00:16:50.557 { 00:16:50.557 "name": "BaseBdev1", 00:16:50.557 "aliases": [ 00:16:50.557 "6cceb72f-280c-447b-81cb-aa64c587b697" 00:16:50.557 ], 00:16:50.557 "product_name": "Malloc disk", 00:16:50.557 "block_size": 512, 00:16:50.557 "num_blocks": 65536, 00:16:50.557 "uuid": "6cceb72f-280c-447b-81cb-aa64c587b697", 00:16:50.557 "assigned_rate_limits": { 00:16:50.557 "rw_ios_per_sec": 0, 00:16:50.557 "rw_mbytes_per_sec": 0, 00:16:50.557 "r_mbytes_per_sec": 0, 00:16:50.557 "w_mbytes_per_sec": 
0 00:16:50.557 }, 00:16:50.557 "claimed": true, 00:16:50.557 "claim_type": "exclusive_write", 00:16:50.557 "zoned": false, 00:16:50.557 "supported_io_types": { 00:16:50.557 "read": true, 00:16:50.557 "write": true, 00:16:50.557 "unmap": true, 00:16:50.557 "write_zeroes": true, 00:16:50.557 "flush": true, 00:16:50.557 "reset": true, 00:16:50.557 "compare": false, 00:16:50.557 "compare_and_write": false, 00:16:50.557 "abort": true, 00:16:50.557 "nvme_admin": false, 00:16:50.557 "nvme_io": false 00:16:50.557 }, 00:16:50.557 "memory_domains": [ 00:16:50.557 { 00:16:50.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.557 "dma_device_type": 2 00:16:50.557 } 00:16:50.557 ], 00:16:50.557 "driver_specific": {} 00:16:50.557 } 00:16:50.557 ] 00:16:50.557 05:14:09 -- common/autotest_common.sh@895 -- # return 0 00:16:50.557 05:14:09 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:50.557 05:14:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:50.557 05:14:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:50.557 05:14:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:50.557 05:14:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:50.557 05:14:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:50.557 05:14:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:50.557 05:14:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:50.557 05:14:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:50.557 05:14:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:50.557 05:14:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.557 05:14:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.816 05:14:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:50.816 "name": "Existed_Raid", 00:16:50.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.816 "strip_size_kb": 64, 00:16:50.816 "state": "configuring", 00:16:50.816 "raid_level": "concat", 00:16:50.816 "superblock": false, 00:16:50.816 "num_base_bdevs": 3, 00:16:50.816 "num_base_bdevs_discovered": 1, 00:16:50.816 "num_base_bdevs_operational": 3, 00:16:50.816 "base_bdevs_list": [ 00:16:50.816 { 00:16:50.816 "name": "BaseBdev1", 00:16:50.816 "uuid": "6cceb72f-280c-447b-81cb-aa64c587b697", 00:16:50.816 "is_configured": true, 00:16:50.816 "data_offset": 0, 00:16:50.816 "data_size": 65536 00:16:50.816 }, 00:16:50.816 { 00:16:50.816 "name": "BaseBdev2", 00:16:50.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.816 "is_configured": false, 00:16:50.816 "data_offset": 0, 00:16:50.816 "data_size": 0 00:16:50.816 }, 00:16:50.816 { 00:16:50.816 "name": "BaseBdev3", 00:16:50.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.816 "is_configured": false, 00:16:50.816 "data_offset": 0, 00:16:50.816 "data_size": 0 00:16:50.816 } 00:16:50.816 ] 00:16:50.816 }' 00:16:50.816 05:14:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:50.816 05:14:09 -- common/autotest_common.sh@10 -- # set +x 00:16:51.074 05:14:10 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:51.332 [2024-07-26 05:14:10.329859] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:51.332 [2024-07-26 05:14:10.329950] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x516000006680 name Existed_Raid, state configuring 00:16:51.332 05:14:10 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:51.332 05:14:10 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:51.613 [2024-07-26 05:14:10.581984] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:51.613 [2024-07-26 05:14:10.584118] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:51.613 [2024-07-26 05:14:10.584200] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:51.613 [2024-07-26 05:14:10.584215] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:51.613 [2024-07-26 05:14:10.584229] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:51.613 05:14:10 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:51.613 05:14:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:51.613 05:14:10 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:51.613 05:14:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:51.613 05:14:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:51.613 05:14:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:51.613 05:14:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:51.613 05:14:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:51.613 05:14:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:51.613 05:14:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:51.613 05:14:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:51.613 05:14:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:51.613 05:14:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.613 05:14:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.871 05:14:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:51.871 "name": "Existed_Raid", 00:16:51.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.871 "strip_size_kb": 64, 00:16:51.871 "state": "configuring", 00:16:51.871 "raid_level": "concat", 00:16:51.871 "superblock": false, 00:16:51.871 "num_base_bdevs": 3, 00:16:51.871 "num_base_bdevs_discovered": 1, 00:16:51.871 "num_base_bdevs_operational": 3, 00:16:51.871 "base_bdevs_list": [ 00:16:51.871 { 00:16:51.871 "name": "BaseBdev1", 00:16:51.871 "uuid": "6cceb72f-280c-447b-81cb-aa64c587b697", 00:16:51.871 "is_configured": true, 00:16:51.871 "data_offset": 0, 00:16:51.871 "data_size": 65536 00:16:51.871 }, 00:16:51.871 { 00:16:51.871 "name": "BaseBdev2", 00:16:51.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.871 "is_configured": false, 00:16:51.871 "data_offset": 0, 00:16:51.871 "data_size": 0 00:16:51.871 }, 00:16:51.871 { 00:16:51.871 "name": "BaseBdev3", 00:16:51.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.871 "is_configured": false, 00:16:51.871 "data_offset": 0, 00:16:51.871 "data_size": 0 00:16:51.871 } 00:16:51.871 ] 00:16:51.871 }' 00:16:51.871 05:14:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:51.871 05:14:10 -- common/autotest_common.sh@10 -- # set +x 00:16:52.129 05:14:11 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:52.390 [2024-07-26 05:14:11.339097] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:52.390 BaseBdev2 00:16:52.390 05:14:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:52.390 05:14:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:52.390 05:14:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:52.390 05:14:11 -- common/autotest_common.sh@889 -- # local i 00:16:52.390 05:14:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:52.390 05:14:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:52.390 05:14:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:52.649 05:14:11 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:52.908 [ 00:16:52.908 { 00:16:52.908 "name": "BaseBdev2", 00:16:52.908 "aliases": [ 00:16:52.908 "a8185263-a688-44ec-b50a-e1216b83be11" 00:16:52.908 ], 00:16:52.908 "product_name": "Malloc disk", 00:16:52.908 "block_size": 512, 00:16:52.908 "num_blocks": 65536, 00:16:52.908 "uuid": "a8185263-a688-44ec-b50a-e1216b83be11", 00:16:52.908 "assigned_rate_limits": { 00:16:52.908 "rw_ios_per_sec": 0, 00:16:52.908 "rw_mbytes_per_sec": 0, 00:16:52.908 "r_mbytes_per_sec": 0, 00:16:52.908 "w_mbytes_per_sec": 0 00:16:52.908 }, 00:16:52.908 "claimed": true, 00:16:52.908 "claim_type": "exclusive_write", 00:16:52.908 "zoned": false, 00:16:52.908 "supported_io_types": { 00:16:52.908 "read": true, 00:16:52.908 "write": true, 00:16:52.908 "unmap": true, 00:16:52.908 "write_zeroes": true, 00:16:52.908 "flush": true, 00:16:52.908 "reset": true, 00:16:52.908 "compare": false, 00:16:52.908 "compare_and_write": false, 00:16:52.908 "abort": true, 00:16:52.908 "nvme_admin": false, 00:16:52.908 "nvme_io": false 00:16:52.908 }, 00:16:52.908 "memory_domains": [ 00:16:52.908 { 00:16:52.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.908 "dma_device_type": 2 00:16:52.908 } 00:16:52.908 ], 00:16:52.908 "driver_specific": {} 00:16:52.908 } 00:16:52.908 ] 00:16:52.908 05:14:11 -- common/autotest_common.sh@895 -- # return 0 00:16:52.908 05:14:11 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:52.908 05:14:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:52.908 05:14:11 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:52.908 05:14:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:52.908 05:14:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:52.908 05:14:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:52.908 05:14:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:52.908 05:14:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:52.908 05:14:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:52.908 05:14:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:52.908 05:14:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:52.908 05:14:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:52.908 05:14:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.908 05:14:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
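Condensed, the create-and-verify loop traced above reduces to a couple of RPC calls against the test socket. A minimal sketch, using the exact socket path and bdev names from this run:

# Create a 32 MiB, 512-byte-block malloc bdev to serve as the next base device (as logged above).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
# Re-read the raid bdev list and pick out Existed_Raid from the returned JSON array.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid")'
# The test then compares state / num_base_bdevs_discovered in that JSON against the expected values
# (configuring / 2 at this point in the run).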
00:16:53.166 05:14:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:53.166 "name": "Existed_Raid", 00:16:53.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.166 "strip_size_kb": 64, 00:16:53.166 "state": "configuring", 00:16:53.166 "raid_level": "concat", 00:16:53.166 "superblock": false, 00:16:53.166 "num_base_bdevs": 3, 00:16:53.166 "num_base_bdevs_discovered": 2, 00:16:53.166 "num_base_bdevs_operational": 3, 00:16:53.166 "base_bdevs_list": [ 00:16:53.166 { 00:16:53.166 "name": "BaseBdev1", 00:16:53.166 "uuid": "6cceb72f-280c-447b-81cb-aa64c587b697", 00:16:53.166 "is_configured": true, 00:16:53.166 "data_offset": 0, 00:16:53.166 "data_size": 65536 00:16:53.166 }, 00:16:53.166 { 00:16:53.166 "name": "BaseBdev2", 00:16:53.166 "uuid": "a8185263-a688-44ec-b50a-e1216b83be11", 00:16:53.166 "is_configured": true, 00:16:53.166 "data_offset": 0, 00:16:53.166 "data_size": 65536 00:16:53.166 }, 00:16:53.166 { 00:16:53.166 "name": "BaseBdev3", 00:16:53.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.166 "is_configured": false, 00:16:53.166 "data_offset": 0, 00:16:53.166 "data_size": 0 00:16:53.166 } 00:16:53.166 ] 00:16:53.166 }' 00:16:53.166 05:14:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:53.166 05:14:12 -- common/autotest_common.sh@10 -- # set +x 00:16:53.425 05:14:12 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:53.684 [2024-07-26 05:14:12.549975] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:53.684 [2024-07-26 05:14:12.550081] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:16:53.684 [2024-07-26 05:14:12.550098] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:53.684 [2024-07-26 05:14:12.550221] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:16:53.684 [2024-07-26 05:14:12.550643] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:16:53.684 [2024-07-26 05:14:12.550660] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:16:53.684 [2024-07-26 05:14:12.550930] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.684 BaseBdev3 00:16:53.684 05:14:12 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:53.684 05:14:12 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:53.684 05:14:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:53.684 05:14:12 -- common/autotest_common.sh@889 -- # local i 00:16:53.684 05:14:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:53.684 05:14:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:53.684 05:14:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:53.684 05:14:12 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:53.943 [ 00:16:53.943 { 00:16:53.943 "name": "BaseBdev3", 00:16:53.943 "aliases": [ 00:16:53.943 "556b0e5e-1100-4017-b905-e1aa9abb923b" 00:16:53.943 ], 00:16:53.943 "product_name": "Malloc disk", 00:16:53.943 "block_size": 512, 00:16:53.943 "num_blocks": 65536, 00:16:53.943 "uuid": "556b0e5e-1100-4017-b905-e1aa9abb923b", 00:16:53.943 "assigned_rate_limits": { 00:16:53.943 
"rw_ios_per_sec": 0, 00:16:53.943 "rw_mbytes_per_sec": 0, 00:16:53.943 "r_mbytes_per_sec": 0, 00:16:53.943 "w_mbytes_per_sec": 0 00:16:53.943 }, 00:16:53.943 "claimed": true, 00:16:53.943 "claim_type": "exclusive_write", 00:16:53.943 "zoned": false, 00:16:53.943 "supported_io_types": { 00:16:53.943 "read": true, 00:16:53.943 "write": true, 00:16:53.943 "unmap": true, 00:16:53.943 "write_zeroes": true, 00:16:53.943 "flush": true, 00:16:53.943 "reset": true, 00:16:53.943 "compare": false, 00:16:53.943 "compare_and_write": false, 00:16:53.943 "abort": true, 00:16:53.943 "nvme_admin": false, 00:16:53.943 "nvme_io": false 00:16:53.943 }, 00:16:53.943 "memory_domains": [ 00:16:53.943 { 00:16:53.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.943 "dma_device_type": 2 00:16:53.943 } 00:16:53.943 ], 00:16:53.943 "driver_specific": {} 00:16:53.943 } 00:16:53.943 ] 00:16:53.943 05:14:12 -- common/autotest_common.sh@895 -- # return 0 00:16:53.943 05:14:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:53.943 05:14:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:53.943 05:14:12 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:16:53.943 05:14:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:53.943 05:14:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:53.943 05:14:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:53.943 05:14:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:53.943 05:14:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:53.943 05:14:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:53.943 05:14:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:53.943 05:14:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:53.943 05:14:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:53.943 05:14:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.943 05:14:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.201 05:14:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:54.201 "name": "Existed_Raid", 00:16:54.201 "uuid": "702f2138-88a3-4585-aacc-daa7e5c90cab", 00:16:54.201 "strip_size_kb": 64, 00:16:54.201 "state": "online", 00:16:54.201 "raid_level": "concat", 00:16:54.201 "superblock": false, 00:16:54.201 "num_base_bdevs": 3, 00:16:54.201 "num_base_bdevs_discovered": 3, 00:16:54.201 "num_base_bdevs_operational": 3, 00:16:54.201 "base_bdevs_list": [ 00:16:54.201 { 00:16:54.201 "name": "BaseBdev1", 00:16:54.201 "uuid": "6cceb72f-280c-447b-81cb-aa64c587b697", 00:16:54.201 "is_configured": true, 00:16:54.201 "data_offset": 0, 00:16:54.201 "data_size": 65536 00:16:54.201 }, 00:16:54.201 { 00:16:54.201 "name": "BaseBdev2", 00:16:54.201 "uuid": "a8185263-a688-44ec-b50a-e1216b83be11", 00:16:54.201 "is_configured": true, 00:16:54.201 "data_offset": 0, 00:16:54.201 "data_size": 65536 00:16:54.201 }, 00:16:54.201 { 00:16:54.201 "name": "BaseBdev3", 00:16:54.201 "uuid": "556b0e5e-1100-4017-b905-e1aa9abb923b", 00:16:54.201 "is_configured": true, 00:16:54.201 "data_offset": 0, 00:16:54.201 "data_size": 65536 00:16:54.201 } 00:16:54.201 ] 00:16:54.201 }' 00:16:54.201 05:14:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:54.201 05:14:13 -- common/autotest_common.sh@10 -- # set +x 00:16:54.459 05:14:13 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:16:54.717 [2024-07-26 05:14:13.778530] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:54.717 [2024-07-26 05:14:13.778589] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:54.717 [2024-07-26 05:14:13.778644] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:54.975 05:14:13 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:54.975 05:14:13 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:16:54.975 05:14:13 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:54.975 05:14:13 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:54.975 05:14:13 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:54.975 05:14:13 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:16:54.975 05:14:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:54.975 05:14:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:54.975 05:14:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:54.975 05:14:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:54.975 05:14:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:54.975 05:14:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:54.975 05:14:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:54.975 05:14:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:54.975 05:14:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:54.975 05:14:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.975 05:14:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.234 05:14:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:55.234 "name": "Existed_Raid", 00:16:55.234 "uuid": "702f2138-88a3-4585-aacc-daa7e5c90cab", 00:16:55.234 "strip_size_kb": 64, 00:16:55.234 "state": "offline", 00:16:55.234 "raid_level": "concat", 00:16:55.234 "superblock": false, 00:16:55.234 "num_base_bdevs": 3, 00:16:55.234 "num_base_bdevs_discovered": 2, 00:16:55.234 "num_base_bdevs_operational": 2, 00:16:55.234 "base_bdevs_list": [ 00:16:55.234 { 00:16:55.234 "name": null, 00:16:55.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.234 "is_configured": false, 00:16:55.234 "data_offset": 0, 00:16:55.234 "data_size": 65536 00:16:55.234 }, 00:16:55.234 { 00:16:55.234 "name": "BaseBdev2", 00:16:55.234 "uuid": "a8185263-a688-44ec-b50a-e1216b83be11", 00:16:55.234 "is_configured": true, 00:16:55.234 "data_offset": 0, 00:16:55.234 "data_size": 65536 00:16:55.234 }, 00:16:55.234 { 00:16:55.234 "name": "BaseBdev3", 00:16:55.234 "uuid": "556b0e5e-1100-4017-b905-e1aa9abb923b", 00:16:55.234 "is_configured": true, 00:16:55.234 "data_offset": 0, 00:16:55.234 "data_size": 65536 00:16:55.234 } 00:16:55.234 ] 00:16:55.234 }' 00:16:55.234 05:14:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:55.234 05:14:14 -- common/autotest_common.sh@10 -- # set +x 00:16:55.492 05:14:14 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:55.492 05:14:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:55.492 05:14:14 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.492 05:14:14 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:55.750 05:14:14 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:55.750 05:14:14 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:55.750 05:14:14 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:56.009 [2024-07-26 05:14:14.866359] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:56.009 05:14:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:56.009 05:14:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:56.009 05:14:14 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.009 05:14:14 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:56.267 05:14:15 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:56.267 05:14:15 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:56.268 05:14:15 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:56.526 [2024-07-26 05:14:15.412721] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:56.526 [2024-07-26 05:14:15.412810] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:16:56.526 05:14:15 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:56.526 05:14:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:56.526 05:14:15 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.526 05:14:15 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:56.785 05:14:15 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:56.785 05:14:15 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:56.785 05:14:15 -- bdev/bdev_raid.sh@287 -- # killprocess 72175 00:16:56.785 05:14:15 -- common/autotest_common.sh@926 -- # '[' -z 72175 ']' 00:16:56.785 05:14:15 -- common/autotest_common.sh@930 -- # kill -0 72175 00:16:56.785 05:14:15 -- common/autotest_common.sh@931 -- # uname 00:16:56.785 05:14:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:56.785 05:14:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72175 00:16:56.785 05:14:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:56.785 05:14:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:56.785 killing process with pid 72175 00:16:56.785 05:14:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72175' 00:16:56.785 05:14:15 -- common/autotest_common.sh@945 -- # kill 72175 00:16:56.785 [2024-07-26 05:14:15.753507] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:56.785 05:14:15 -- common/autotest_common.sh@950 -- # wait 72175 00:16:56.785 [2024-07-26 05:14:15.753624] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:58.163 05:14:16 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:58.163 00:16:58.163 real 0m10.194s 00:16:58.163 user 0m16.774s 00:16:58.163 sys 0m1.526s 00:16:58.163 05:14:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:58.163 05:14:16 -- common/autotest_common.sh@10 -- # set +x 00:16:58.163 ************************************ 00:16:58.163 END TEST raid_state_function_test 00:16:58.163 ************************************ 00:16:58.163 05:14:16 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:16:58.163 05:14:16 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:58.163 
05:14:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:58.163 05:14:16 -- common/autotest_common.sh@10 -- # set +x 00:16:58.163 ************************************ 00:16:58.163 START TEST raid_state_function_test_sb 00:16:58.163 ************************************ 00:16:58.163 05:14:16 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 true 00:16:58.163 05:14:16 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:16:58.163 05:14:16 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:58.163 05:14:16 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:58.163 05:14:16 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:58.163 05:14:16 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:58.163 05:14:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:58.163 05:14:16 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:16:58.163 05:14:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:58.163 05:14:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:58.163 05:14:16 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:16:58.163 05:14:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:58.163 05:14:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:58.163 05:14:16 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:16:58.163 05:14:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:58.163 05:14:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:58.163 05:14:16 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:58.163 05:14:16 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:58.163 05:14:16 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:58.163 05:14:16 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:58.163 05:14:16 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:58.163 05:14:16 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:58.164 05:14:16 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:16:58.164 05:14:16 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:58.164 05:14:16 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:58.164 05:14:16 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:58.164 05:14:16 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:58.164 05:14:16 -- bdev/bdev_raid.sh@226 -- # raid_pid=72634 00:16:58.164 Process raid pid: 72634 00:16:58.164 05:14:16 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 72634' 00:16:58.164 05:14:16 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:58.164 05:14:16 -- bdev/bdev_raid.sh@228 -- # waitforlisten 72634 /var/tmp/spdk-raid.sock 00:16:58.164 05:14:16 -- common/autotest_common.sh@819 -- # '[' -z 72634 ']' 00:16:58.164 05:14:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:58.164 05:14:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:58.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:58.164 05:14:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:58.164 05:14:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:58.164 05:14:16 -- common/autotest_common.sh@10 -- # set +x 00:16:58.164 [2024-07-26 05:14:17.019862] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
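The _sb variant starting here repeats the same state-machine checks but with superblock=true, so -s is passed to the create RPC and each base bdev carries an on-disk raid superblock; accordingly the JSON dumps below report data_offset 2048 and data_size 63488 instead of 0 and 65536. The create call, exactly as it appears later in this trace:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid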
00:16:58.164 [2024-07-26 05:14:17.020553] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.164 [2024-07-26 05:14:17.193620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.438 [2024-07-26 05:14:17.430933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.709 [2024-07-26 05:14:17.589976] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:58.967 05:14:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:58.967 05:14:17 -- common/autotest_common.sh@852 -- # return 0 00:16:58.967 05:14:17 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:59.225 [2024-07-26 05:14:18.238794] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:59.225 [2024-07-26 05:14:18.239305] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:59.225 [2024-07-26 05:14:18.239329] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:59.225 [2024-07-26 05:14:18.239438] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:59.225 [2024-07-26 05:14:18.239452] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:59.225 [2024-07-26 05:14:18.239531] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:59.225 05:14:18 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:59.225 05:14:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:59.225 05:14:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:59.225 05:14:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:59.225 05:14:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:59.226 05:14:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:59.226 05:14:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:59.226 05:14:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:59.226 05:14:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:59.226 05:14:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:59.226 05:14:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:59.226 05:14:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.484 05:14:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:59.484 "name": "Existed_Raid", 00:16:59.484 "uuid": "e397cb56-c711-41e3-831b-0b3fff972d25", 00:16:59.484 "strip_size_kb": 64, 00:16:59.484 "state": "configuring", 00:16:59.484 "raid_level": "concat", 00:16:59.484 "superblock": true, 00:16:59.484 "num_base_bdevs": 3, 00:16:59.484 "num_base_bdevs_discovered": 0, 00:16:59.484 "num_base_bdevs_operational": 3, 00:16:59.484 "base_bdevs_list": [ 00:16:59.484 { 00:16:59.484 "name": "BaseBdev1", 00:16:59.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.484 "is_configured": false, 00:16:59.484 "data_offset": 0, 00:16:59.484 "data_size": 0 00:16:59.484 }, 00:16:59.484 { 00:16:59.484 "name": "BaseBdev2", 00:16:59.484 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:59.484 "is_configured": false, 00:16:59.484 "data_offset": 0, 00:16:59.484 "data_size": 0 00:16:59.484 }, 00:16:59.484 { 00:16:59.484 "name": "BaseBdev3", 00:16:59.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.484 "is_configured": false, 00:16:59.484 "data_offset": 0, 00:16:59.484 "data_size": 0 00:16:59.484 } 00:16:59.484 ] 00:16:59.484 }' 00:16:59.484 05:14:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:59.484 05:14:18 -- common/autotest_common.sh@10 -- # set +x 00:16:59.742 05:14:18 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:00.000 [2024-07-26 05:14:19.030843] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:00.000 [2024-07-26 05:14:19.030914] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:17:00.000 05:14:19 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:00.258 [2024-07-26 05:14:19.242970] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:00.258 [2024-07-26 05:14:19.243502] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:00.258 [2024-07-26 05:14:19.243537] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:00.258 [2024-07-26 05:14:19.243671] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:00.258 [2024-07-26 05:14:19.243686] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:00.258 [2024-07-26 05:14:19.243782] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:00.258 05:14:19 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:00.516 [2024-07-26 05:14:19.483700] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:00.516 BaseBdev1 00:17:00.516 05:14:19 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:00.516 05:14:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:00.516 05:14:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:00.516 05:14:19 -- common/autotest_common.sh@889 -- # local i 00:17:00.516 05:14:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:00.516 05:14:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:00.516 05:14:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:00.775 05:14:19 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:01.034 [ 00:17:01.034 { 00:17:01.034 "name": "BaseBdev1", 00:17:01.034 "aliases": [ 00:17:01.034 "0d4ce961-bfe6-4a06-954e-9f3917e0de80" 00:17:01.034 ], 00:17:01.034 "product_name": "Malloc disk", 00:17:01.034 "block_size": 512, 00:17:01.034 "num_blocks": 65536, 00:17:01.034 "uuid": "0d4ce961-bfe6-4a06-954e-9f3917e0de80", 00:17:01.034 "assigned_rate_limits": { 00:17:01.034 "rw_ios_per_sec": 0, 00:17:01.034 "rw_mbytes_per_sec": 0, 00:17:01.034 "r_mbytes_per_sec": 0, 00:17:01.034 
"w_mbytes_per_sec": 0 00:17:01.034 }, 00:17:01.034 "claimed": true, 00:17:01.034 "claim_type": "exclusive_write", 00:17:01.034 "zoned": false, 00:17:01.034 "supported_io_types": { 00:17:01.034 "read": true, 00:17:01.034 "write": true, 00:17:01.034 "unmap": true, 00:17:01.034 "write_zeroes": true, 00:17:01.034 "flush": true, 00:17:01.034 "reset": true, 00:17:01.034 "compare": false, 00:17:01.034 "compare_and_write": false, 00:17:01.034 "abort": true, 00:17:01.034 "nvme_admin": false, 00:17:01.034 "nvme_io": false 00:17:01.034 }, 00:17:01.034 "memory_domains": [ 00:17:01.034 { 00:17:01.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.034 "dma_device_type": 2 00:17:01.034 } 00:17:01.034 ], 00:17:01.034 "driver_specific": {} 00:17:01.034 } 00:17:01.034 ] 00:17:01.034 05:14:19 -- common/autotest_common.sh@895 -- # return 0 00:17:01.034 05:14:19 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:01.034 05:14:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:01.034 05:14:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:01.034 05:14:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:01.034 05:14:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:01.034 05:14:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:01.034 05:14:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:01.034 05:14:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:01.034 05:14:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:01.034 05:14:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:01.034 05:14:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.034 05:14:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.034 05:14:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:01.034 "name": "Existed_Raid", 00:17:01.034 "uuid": "8993da81-4518-4c5f-af76-87f371657711", 00:17:01.034 "strip_size_kb": 64, 00:17:01.034 "state": "configuring", 00:17:01.034 "raid_level": "concat", 00:17:01.034 "superblock": true, 00:17:01.034 "num_base_bdevs": 3, 00:17:01.034 "num_base_bdevs_discovered": 1, 00:17:01.034 "num_base_bdevs_operational": 3, 00:17:01.034 "base_bdevs_list": [ 00:17:01.034 { 00:17:01.034 "name": "BaseBdev1", 00:17:01.034 "uuid": "0d4ce961-bfe6-4a06-954e-9f3917e0de80", 00:17:01.034 "is_configured": true, 00:17:01.034 "data_offset": 2048, 00:17:01.034 "data_size": 63488 00:17:01.034 }, 00:17:01.034 { 00:17:01.034 "name": "BaseBdev2", 00:17:01.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.034 "is_configured": false, 00:17:01.034 "data_offset": 0, 00:17:01.034 "data_size": 0 00:17:01.034 }, 00:17:01.034 { 00:17:01.034 "name": "BaseBdev3", 00:17:01.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.034 "is_configured": false, 00:17:01.034 "data_offset": 0, 00:17:01.034 "data_size": 0 00:17:01.034 } 00:17:01.034 ] 00:17:01.034 }' 00:17:01.034 05:14:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:01.034 05:14:20 -- common/autotest_common.sh@10 -- # set +x 00:17:01.600 05:14:20 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:01.600 [2024-07-26 05:14:20.700056] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:01.600 [2024-07-26 05:14:20.700133] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:17:01.878 05:14:20 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:01.878 05:14:20 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:02.148 05:14:20 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:02.148 BaseBdev1 00:17:02.148 05:14:21 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:02.148 05:14:21 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:02.148 05:14:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:02.148 05:14:21 -- common/autotest_common.sh@889 -- # local i 00:17:02.148 05:14:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:02.148 05:14:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:02.148 05:14:21 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:02.406 05:14:21 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:02.665 [ 00:17:02.665 { 00:17:02.665 "name": "BaseBdev1", 00:17:02.665 "aliases": [ 00:17:02.665 "611aaac3-7bc0-4f90-b168-d13397dd81e1" 00:17:02.665 ], 00:17:02.665 "product_name": "Malloc disk", 00:17:02.665 "block_size": 512, 00:17:02.665 "num_blocks": 65536, 00:17:02.665 "uuid": "611aaac3-7bc0-4f90-b168-d13397dd81e1", 00:17:02.665 "assigned_rate_limits": { 00:17:02.665 "rw_ios_per_sec": 0, 00:17:02.665 "rw_mbytes_per_sec": 0, 00:17:02.665 "r_mbytes_per_sec": 0, 00:17:02.665 "w_mbytes_per_sec": 0 00:17:02.665 }, 00:17:02.665 "claimed": false, 00:17:02.665 "zoned": false, 00:17:02.665 "supported_io_types": { 00:17:02.665 "read": true, 00:17:02.665 "write": true, 00:17:02.665 "unmap": true, 00:17:02.665 "write_zeroes": true, 00:17:02.665 "flush": true, 00:17:02.665 "reset": true, 00:17:02.665 "compare": false, 00:17:02.665 "compare_and_write": false, 00:17:02.665 "abort": true, 00:17:02.665 "nvme_admin": false, 00:17:02.665 "nvme_io": false 00:17:02.665 }, 00:17:02.665 "memory_domains": [ 00:17:02.665 { 00:17:02.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.665 "dma_device_type": 2 00:17:02.665 } 00:17:02.665 ], 00:17:02.665 "driver_specific": {} 00:17:02.665 } 00:17:02.665 ] 00:17:02.665 05:14:21 -- common/autotest_common.sh@895 -- # return 0 00:17:02.665 05:14:21 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:02.923 [2024-07-26 05:14:21.898727] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:02.923 [2024-07-26 05:14:21.900742] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:02.923 [2024-07-26 05:14:21.901268] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:02.923 [2024-07-26 05:14:21.901293] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:02.923 [2024-07-26 05:14:21.901397] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:02.923 05:14:21 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:02.923 05:14:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:02.923 
05:14:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:02.923 05:14:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:02.923 05:14:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:02.923 05:14:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:02.923 05:14:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:02.923 05:14:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:02.923 05:14:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:02.923 05:14:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:02.923 05:14:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:02.923 05:14:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:02.923 05:14:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.923 05:14:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.181 05:14:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:03.181 "name": "Existed_Raid", 00:17:03.181 "uuid": "03bfcf43-951a-4d1b-9a69-726ad2469bc8", 00:17:03.181 "strip_size_kb": 64, 00:17:03.181 "state": "configuring", 00:17:03.181 "raid_level": "concat", 00:17:03.181 "superblock": true, 00:17:03.181 "num_base_bdevs": 3, 00:17:03.181 "num_base_bdevs_discovered": 1, 00:17:03.181 "num_base_bdevs_operational": 3, 00:17:03.181 "base_bdevs_list": [ 00:17:03.181 { 00:17:03.181 "name": "BaseBdev1", 00:17:03.181 "uuid": "611aaac3-7bc0-4f90-b168-d13397dd81e1", 00:17:03.181 "is_configured": true, 00:17:03.181 "data_offset": 2048, 00:17:03.181 "data_size": 63488 00:17:03.181 }, 00:17:03.181 { 00:17:03.181 "name": "BaseBdev2", 00:17:03.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.181 "is_configured": false, 00:17:03.181 "data_offset": 0, 00:17:03.181 "data_size": 0 00:17:03.181 }, 00:17:03.181 { 00:17:03.181 "name": "BaseBdev3", 00:17:03.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.181 "is_configured": false, 00:17:03.181 "data_offset": 0, 00:17:03.181 "data_size": 0 00:17:03.181 } 00:17:03.181 ] 00:17:03.181 }' 00:17:03.181 05:14:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:03.181 05:14:22 -- common/autotest_common.sh@10 -- # set +x 00:17:03.439 05:14:22 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:03.698 [2024-07-26 05:14:22.750125] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:03.698 BaseBdev2 00:17:03.698 05:14:22 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:03.698 05:14:22 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:03.698 05:14:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:03.698 05:14:22 -- common/autotest_common.sh@889 -- # local i 00:17:03.698 05:14:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:03.698 05:14:22 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:03.698 05:14:22 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:03.956 05:14:23 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:04.213 [ 00:17:04.213 { 00:17:04.213 "name": "BaseBdev2", 00:17:04.213 "aliases": [ 00:17:04.213 
"a5db23c6-3e25-4de8-be05-8ccc0ac56e13" 00:17:04.213 ], 00:17:04.213 "product_name": "Malloc disk", 00:17:04.213 "block_size": 512, 00:17:04.213 "num_blocks": 65536, 00:17:04.213 "uuid": "a5db23c6-3e25-4de8-be05-8ccc0ac56e13", 00:17:04.213 "assigned_rate_limits": { 00:17:04.213 "rw_ios_per_sec": 0, 00:17:04.213 "rw_mbytes_per_sec": 0, 00:17:04.213 "r_mbytes_per_sec": 0, 00:17:04.213 "w_mbytes_per_sec": 0 00:17:04.213 }, 00:17:04.213 "claimed": true, 00:17:04.213 "claim_type": "exclusive_write", 00:17:04.213 "zoned": false, 00:17:04.213 "supported_io_types": { 00:17:04.213 "read": true, 00:17:04.213 "write": true, 00:17:04.214 "unmap": true, 00:17:04.214 "write_zeroes": true, 00:17:04.214 "flush": true, 00:17:04.214 "reset": true, 00:17:04.214 "compare": false, 00:17:04.214 "compare_and_write": false, 00:17:04.214 "abort": true, 00:17:04.214 "nvme_admin": false, 00:17:04.214 "nvme_io": false 00:17:04.214 }, 00:17:04.214 "memory_domains": [ 00:17:04.214 { 00:17:04.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.214 "dma_device_type": 2 00:17:04.214 } 00:17:04.214 ], 00:17:04.214 "driver_specific": {} 00:17:04.214 } 00:17:04.214 ] 00:17:04.214 05:14:23 -- common/autotest_common.sh@895 -- # return 0 00:17:04.214 05:14:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:04.214 05:14:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:04.214 05:14:23 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:04.214 05:14:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:04.214 05:14:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:04.214 05:14:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:04.214 05:14:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:04.214 05:14:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:04.214 05:14:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:04.214 05:14:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:04.214 05:14:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:04.214 05:14:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:04.214 05:14:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.214 05:14:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.483 05:14:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:04.483 "name": "Existed_Raid", 00:17:04.483 "uuid": "03bfcf43-951a-4d1b-9a69-726ad2469bc8", 00:17:04.483 "strip_size_kb": 64, 00:17:04.483 "state": "configuring", 00:17:04.483 "raid_level": "concat", 00:17:04.483 "superblock": true, 00:17:04.483 "num_base_bdevs": 3, 00:17:04.483 "num_base_bdevs_discovered": 2, 00:17:04.483 "num_base_bdevs_operational": 3, 00:17:04.483 "base_bdevs_list": [ 00:17:04.483 { 00:17:04.483 "name": "BaseBdev1", 00:17:04.483 "uuid": "611aaac3-7bc0-4f90-b168-d13397dd81e1", 00:17:04.483 "is_configured": true, 00:17:04.483 "data_offset": 2048, 00:17:04.483 "data_size": 63488 00:17:04.483 }, 00:17:04.483 { 00:17:04.483 "name": "BaseBdev2", 00:17:04.483 "uuid": "a5db23c6-3e25-4de8-be05-8ccc0ac56e13", 00:17:04.483 "is_configured": true, 00:17:04.483 "data_offset": 2048, 00:17:04.483 "data_size": 63488 00:17:04.483 }, 00:17:04.483 { 00:17:04.483 "name": "BaseBdev3", 00:17:04.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.483 "is_configured": false, 00:17:04.483 "data_offset": 0, 00:17:04.483 "data_size": 0 
00:17:04.483 } 00:17:04.483 ] 00:17:04.483 }' 00:17:04.483 05:14:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:04.483 05:14:23 -- common/autotest_common.sh@10 -- # set +x 00:17:04.746 05:14:23 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:05.003 [2024-07-26 05:14:23.931169] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:05.003 [2024-07-26 05:14:23.931410] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:17:05.003 [2024-07-26 05:14:23.931465] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:05.003 [2024-07-26 05:14:23.931577] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:17:05.003 [2024-07-26 05:14:23.931972] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:17:05.003 [2024-07-26 05:14:23.931990] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:17:05.003 [2024-07-26 05:14:23.932189] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.003 BaseBdev3 00:17:05.003 05:14:23 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:05.003 05:14:23 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:05.003 05:14:23 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:05.003 05:14:23 -- common/autotest_common.sh@889 -- # local i 00:17:05.003 05:14:23 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:05.003 05:14:23 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:05.003 05:14:23 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:05.270 05:14:24 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:05.551 [ 00:17:05.551 { 00:17:05.551 "name": "BaseBdev3", 00:17:05.551 "aliases": [ 00:17:05.551 "94680fab-ab93-4383-90f2-d66bdfbfb361" 00:17:05.551 ], 00:17:05.551 "product_name": "Malloc disk", 00:17:05.551 "block_size": 512, 00:17:05.551 "num_blocks": 65536, 00:17:05.551 "uuid": "94680fab-ab93-4383-90f2-d66bdfbfb361", 00:17:05.551 "assigned_rate_limits": { 00:17:05.551 "rw_ios_per_sec": 0, 00:17:05.551 "rw_mbytes_per_sec": 0, 00:17:05.551 "r_mbytes_per_sec": 0, 00:17:05.551 "w_mbytes_per_sec": 0 00:17:05.551 }, 00:17:05.551 "claimed": true, 00:17:05.551 "claim_type": "exclusive_write", 00:17:05.551 "zoned": false, 00:17:05.551 "supported_io_types": { 00:17:05.551 "read": true, 00:17:05.551 "write": true, 00:17:05.551 "unmap": true, 00:17:05.551 "write_zeroes": true, 00:17:05.551 "flush": true, 00:17:05.551 "reset": true, 00:17:05.551 "compare": false, 00:17:05.551 "compare_and_write": false, 00:17:05.551 "abort": true, 00:17:05.551 "nvme_admin": false, 00:17:05.551 "nvme_io": false 00:17:05.551 }, 00:17:05.551 "memory_domains": [ 00:17:05.551 { 00:17:05.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.551 "dma_device_type": 2 00:17:05.551 } 00:17:05.551 ], 00:17:05.551 "driver_specific": {} 00:17:05.551 } 00:17:05.551 ] 00:17:05.551 05:14:24 -- common/autotest_common.sh@895 -- # return 0 00:17:05.551 05:14:24 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:05.551 05:14:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:05.551 05:14:24 -- 
bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:17:05.551 05:14:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:05.551 05:14:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:05.551 05:14:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:05.551 05:14:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:05.551 05:14:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:05.551 05:14:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:05.551 05:14:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:05.551 05:14:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:05.551 05:14:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:05.551 05:14:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.551 05:14:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.551 05:14:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:05.551 "name": "Existed_Raid", 00:17:05.551 "uuid": "03bfcf43-951a-4d1b-9a69-726ad2469bc8", 00:17:05.552 "strip_size_kb": 64, 00:17:05.552 "state": "online", 00:17:05.552 "raid_level": "concat", 00:17:05.552 "superblock": true, 00:17:05.552 "num_base_bdevs": 3, 00:17:05.552 "num_base_bdevs_discovered": 3, 00:17:05.552 "num_base_bdevs_operational": 3, 00:17:05.552 "base_bdevs_list": [ 00:17:05.552 { 00:17:05.552 "name": "BaseBdev1", 00:17:05.552 "uuid": "611aaac3-7bc0-4f90-b168-d13397dd81e1", 00:17:05.552 "is_configured": true, 00:17:05.552 "data_offset": 2048, 00:17:05.552 "data_size": 63488 00:17:05.552 }, 00:17:05.552 { 00:17:05.552 "name": "BaseBdev2", 00:17:05.552 "uuid": "a5db23c6-3e25-4de8-be05-8ccc0ac56e13", 00:17:05.552 "is_configured": true, 00:17:05.552 "data_offset": 2048, 00:17:05.552 "data_size": 63488 00:17:05.552 }, 00:17:05.552 { 00:17:05.552 "name": "BaseBdev3", 00:17:05.552 "uuid": "94680fab-ab93-4383-90f2-d66bdfbfb361", 00:17:05.552 "is_configured": true, 00:17:05.552 "data_offset": 2048, 00:17:05.552 "data_size": 63488 00:17:05.552 } 00:17:05.552 ] 00:17:05.552 }' 00:17:05.552 05:14:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:05.552 05:14:24 -- common/autotest_common.sh@10 -- # set +x 00:17:05.810 05:14:24 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:06.067 [2024-07-26 05:14:25.087715] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:06.067 [2024-07-26 05:14:25.087775] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:06.067 [2024-07-26 05:14:25.087838] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:06.325 05:14:25 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:06.325 05:14:25 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:17:06.325 05:14:25 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:06.325 05:14:25 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:06.325 05:14:25 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:06.325 05:14:25 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:17:06.325 05:14:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:06.325 05:14:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:06.325 05:14:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:06.325 05:14:25 
-- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:06.325 05:14:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:06.325 05:14:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:06.325 05:14:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:06.325 05:14:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:06.325 05:14:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:06.325 05:14:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.325 05:14:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.583 05:14:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:06.583 "name": "Existed_Raid", 00:17:06.583 "uuid": "03bfcf43-951a-4d1b-9a69-726ad2469bc8", 00:17:06.583 "strip_size_kb": 64, 00:17:06.583 "state": "offline", 00:17:06.583 "raid_level": "concat", 00:17:06.583 "superblock": true, 00:17:06.583 "num_base_bdevs": 3, 00:17:06.583 "num_base_bdevs_discovered": 2, 00:17:06.583 "num_base_bdevs_operational": 2, 00:17:06.583 "base_bdevs_list": [ 00:17:06.583 { 00:17:06.583 "name": null, 00:17:06.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.583 "is_configured": false, 00:17:06.583 "data_offset": 2048, 00:17:06.583 "data_size": 63488 00:17:06.583 }, 00:17:06.583 { 00:17:06.583 "name": "BaseBdev2", 00:17:06.583 "uuid": "a5db23c6-3e25-4de8-be05-8ccc0ac56e13", 00:17:06.583 "is_configured": true, 00:17:06.583 "data_offset": 2048, 00:17:06.583 "data_size": 63488 00:17:06.583 }, 00:17:06.583 { 00:17:06.583 "name": "BaseBdev3", 00:17:06.583 "uuid": "94680fab-ab93-4383-90f2-d66bdfbfb361", 00:17:06.583 "is_configured": true, 00:17:06.583 "data_offset": 2048, 00:17:06.583 "data_size": 63488 00:17:06.583 } 00:17:06.583 ] 00:17:06.583 }' 00:17:06.583 05:14:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:06.583 05:14:25 -- common/autotest_common.sh@10 -- # set +x 00:17:06.841 05:14:25 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:06.841 05:14:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:06.841 05:14:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.841 05:14:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:07.098 05:14:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:07.098 05:14:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:07.098 05:14:25 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:07.355 [2024-07-26 05:14:26.224273] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:07.355 05:14:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:07.355 05:14:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:07.355 05:14:26 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.355 05:14:26 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:07.613 05:14:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:07.613 05:14:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:07.613 05:14:26 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:07.871 [2024-07-26 05:14:26.751659] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 
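Concat carries no redundancy (the has_redundancy check above returns 1), so the array cannot stay online once a member disappears. A short sketch of the teardown the test performs next, with the same rpc.py invocation as the log:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Remove base bdevs one at a time; the RAID bdev is expected to report "offline"
  # while members remain, and to be cleaned up once the last member is gone
  $rpc bdev_malloc_delete BaseBdev2
  $rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"]'              # still Existed_Raid
  $rpc bdev_malloc_delete BaseBdev3
  $rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)'  # empty after cleanup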
00:17:07.871 [2024-07-26 05:14:26.751719] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:17:07.871 05:14:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:07.871 05:14:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:07.871 05:14:26 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.871 05:14:26 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:08.129 05:14:27 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:08.129 05:14:27 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:08.129 05:14:27 -- bdev/bdev_raid.sh@287 -- # killprocess 72634 00:17:08.129 05:14:27 -- common/autotest_common.sh@926 -- # '[' -z 72634 ']' 00:17:08.129 05:14:27 -- common/autotest_common.sh@930 -- # kill -0 72634 00:17:08.129 05:14:27 -- common/autotest_common.sh@931 -- # uname 00:17:08.129 05:14:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:08.129 05:14:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72634 00:17:08.129 killing process with pid 72634 00:17:08.129 05:14:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:08.129 05:14:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:08.129 05:14:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72634' 00:17:08.129 05:14:27 -- common/autotest_common.sh@945 -- # kill 72634 00:17:08.129 [2024-07-26 05:14:27.076688] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:08.129 05:14:27 -- common/autotest_common.sh@950 -- # wait 72634 00:17:08.129 [2024-07-26 05:14:27.076794] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:09.062 05:14:28 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:09.062 00:17:09.062 real 0m11.143s 00:17:09.062 user 0m18.619s 00:17:09.062 sys 0m1.574s 00:17:09.062 05:14:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:09.062 ************************************ 00:17:09.062 END TEST raid_state_function_test_sb 00:17:09.062 05:14:28 -- common/autotest_common.sh@10 -- # set +x 00:17:09.062 ************************************ 00:17:09.062 05:14:28 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:17:09.062 05:14:28 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:17:09.062 05:14:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:09.062 05:14:28 -- common/autotest_common.sh@10 -- # set +x 00:17:09.062 ************************************ 00:17:09.062 START TEST raid_superblock_test 00:17:09.062 ************************************ 00:17:09.062 05:14:28 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 3 00:17:09.062 05:14:28 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:17:09.062 05:14:28 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:17:09.062 05:14:28 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:09.062 05:14:28 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:09.062 05:14:28 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:09.062 05:14:28 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:09.062 05:14:28 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:09.062 05:14:28 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:09.062 05:14:28 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:09.062 05:14:28 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:09.062 
05:14:28 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:09.062 05:14:28 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:09.062 05:14:28 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:09.062 05:14:28 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:17:09.062 05:14:28 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:17:09.062 05:14:28 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:17:09.062 05:14:28 -- bdev/bdev_raid.sh@357 -- # raid_pid=72984 00:17:09.062 05:14:28 -- bdev/bdev_raid.sh@358 -- # waitforlisten 72984 /var/tmp/spdk-raid.sock 00:17:09.062 05:14:28 -- common/autotest_common.sh@819 -- # '[' -z 72984 ']' 00:17:09.062 05:14:28 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:09.062 05:14:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:09.062 05:14:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:09.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:09.062 05:14:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:09.062 05:14:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:09.062 05:14:28 -- common/autotest_common.sh@10 -- # set +x 00:17:09.320 [2024-07-26 05:14:28.219348] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:09.320 [2024-07-26 05:14:28.219546] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72984 ] 00:17:09.320 [2024-07-26 05:14:28.389798] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.577 [2024-07-26 05:14:28.562254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.835 [2024-07-26 05:14:28.727833] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:10.093 05:14:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:10.093 05:14:29 -- common/autotest_common.sh@852 -- # return 0 00:17:10.093 05:14:29 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:10.093 05:14:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:10.093 05:14:29 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:10.093 05:14:29 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:10.093 05:14:29 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:10.093 05:14:29 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:10.093 05:14:29 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:10.093 05:14:29 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:10.093 05:14:29 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:10.350 malloc1 00:17:10.350 05:14:29 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:10.608 [2024-07-26 05:14:29.608698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:10.608 [2024-07-26 05:14:29.608802] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
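raid_superblock_test layers a passthru bdev over each malloc bdev so every member carries a fixed, well-known UUID. The pt1 step traced above corresponds to the following calls (sketch with the same paths as the log; pt2 and pt3 follow the same pattern with UUIDs ...0002 and ...0003):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # malloc backing device, then a passthru bdev named pt1 on top with a fixed UUID
  $rpc bdev_malloc_create 32 512 -b malloc1
  $rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001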
00:17:10.608 [2024-07-26 05:14:29.608842] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:17:10.608 [2024-07-26 05:14:29.608857] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.608 [2024-07-26 05:14:29.611275] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.608 [2024-07-26 05:14:29.611333] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:10.608 pt1 00:17:10.608 05:14:29 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:10.608 05:14:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:10.608 05:14:29 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:10.608 05:14:29 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:10.608 05:14:29 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:10.608 05:14:29 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:10.608 05:14:29 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:10.608 05:14:29 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:10.608 05:14:29 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:10.865 malloc2 00:17:10.865 05:14:29 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:11.122 [2024-07-26 05:14:30.061374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:11.122 [2024-07-26 05:14:30.061498] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.122 [2024-07-26 05:14:30.061532] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:17:11.122 [2024-07-26 05:14:30.061547] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.122 [2024-07-26 05:14:30.064004] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.122 [2024-07-26 05:14:30.064106] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:11.122 pt2 00:17:11.122 05:14:30 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:11.122 05:14:30 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:11.122 05:14:30 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:11.122 05:14:30 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:11.122 05:14:30 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:11.122 05:14:30 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:11.122 05:14:30 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:11.122 05:14:30 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:11.122 05:14:30 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:11.379 malloc3 00:17:11.379 05:14:30 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:11.379 [2024-07-26 05:14:30.479824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:11.379 [2024-07-26 05:14:30.479936] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
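Once pt1, pt2 and pt3 are registered (pt3 is completed just below), the test assembles them into a concat array with an on-disk superblock. Sketched with the same arguments it passes:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # 64 KB strips, concat level, superblock enabled (-s); the superblock is what later
  # lets examine rediscover the members, and why re-creating the array directly on
  # malloc1/malloc2/malloc3 fails with "File exists" further down in this log
  $rpc bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s

  # the superblock gives the array a persistent UUID, which the test reads back
  $rpc bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid'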
00:17:11.379 [2024-07-26 05:14:30.479969] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:17:11.379 [2024-07-26 05:14:30.479984] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.379 [2024-07-26 05:14:30.482597] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.379 [2024-07-26 05:14:30.482808] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:11.379 pt3 00:17:11.637 05:14:30 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:11.637 05:14:30 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:11.637 05:14:30 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:11.637 [2024-07-26 05:14:30.727950] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:11.637 [2024-07-26 05:14:30.729997] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:11.637 [2024-07-26 05:14:30.730151] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:11.637 [2024-07-26 05:14:30.730412] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008780 00:17:11.637 [2024-07-26 05:14:30.730439] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:11.637 [2024-07-26 05:14:30.730574] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:17:11.637 [2024-07-26 05:14:30.731000] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008780 00:17:11.637 [2024-07-26 05:14:30.731019] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008780 00:17:11.637 [2024-07-26 05:14:30.731239] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.895 05:14:30 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:17:11.895 05:14:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:11.895 05:14:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:11.895 05:14:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:11.895 05:14:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:11.895 05:14:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:11.895 05:14:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:11.895 05:14:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:11.895 05:14:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:11.895 05:14:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:11.895 05:14:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.895 05:14:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.895 05:14:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:11.895 "name": "raid_bdev1", 00:17:11.895 "uuid": "d195c247-13ef-4d64-bd08-5e47851df1bf", 00:17:11.895 "strip_size_kb": 64, 00:17:11.895 "state": "online", 00:17:11.895 "raid_level": "concat", 00:17:11.895 "superblock": true, 00:17:11.895 "num_base_bdevs": 3, 00:17:11.895 "num_base_bdevs_discovered": 3, 00:17:11.895 "num_base_bdevs_operational": 3, 00:17:11.895 "base_bdevs_list": [ 00:17:11.895 { 00:17:11.895 "name": "pt1", 00:17:11.895 "uuid": 
"da6442cc-8385-5929-ae71-9ef53ddbce5f", 00:17:11.895 "is_configured": true, 00:17:11.895 "data_offset": 2048, 00:17:11.895 "data_size": 63488 00:17:11.895 }, 00:17:11.895 { 00:17:11.895 "name": "pt2", 00:17:11.895 "uuid": "6678c862-8e50-5124-bd78-530bd8faeafd", 00:17:11.895 "is_configured": true, 00:17:11.895 "data_offset": 2048, 00:17:11.895 "data_size": 63488 00:17:11.895 }, 00:17:11.895 { 00:17:11.895 "name": "pt3", 00:17:11.895 "uuid": "0a35aae5-353b-58e8-bd27-30fd1ae0292f", 00:17:11.895 "is_configured": true, 00:17:11.895 "data_offset": 2048, 00:17:11.895 "data_size": 63488 00:17:11.895 } 00:17:11.895 ] 00:17:11.895 }' 00:17:11.895 05:14:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:11.895 05:14:30 -- common/autotest_common.sh@10 -- # set +x 00:17:12.461 05:14:31 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:12.461 05:14:31 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:12.461 [2024-07-26 05:14:31.532383] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:12.461 05:14:31 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=d195c247-13ef-4d64-bd08-5e47851df1bf 00:17:12.461 05:14:31 -- bdev/bdev_raid.sh@380 -- # '[' -z d195c247-13ef-4d64-bd08-5e47851df1bf ']' 00:17:12.461 05:14:31 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:12.719 [2024-07-26 05:14:31.748176] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:12.719 [2024-07-26 05:14:31.748218] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:12.719 [2024-07-26 05:14:31.748303] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:12.719 [2024-07-26 05:14:31.748372] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:12.719 [2024-07-26 05:14:31.748391] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008780 name raid_bdev1, state offline 00:17:12.719 05:14:31 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:12.719 05:14:31 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.977 05:14:32 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:12.977 05:14:32 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:12.977 05:14:32 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:12.977 05:14:32 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:13.235 05:14:32 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:13.235 05:14:32 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:13.571 05:14:32 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:13.571 05:14:32 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:13.850 05:14:32 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:13.850 05:14:32 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:13.850 05:14:32 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:13.850 05:14:32 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:13.850 05:14:32 -- common/autotest_common.sh@640 -- # local es=0 00:17:13.850 05:14:32 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:13.850 05:14:32 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:13.850 05:14:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:13.850 05:14:32 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:14.108 05:14:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:14.108 05:14:32 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:14.108 05:14:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:14.108 05:14:32 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:14.108 05:14:32 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:14.108 05:14:32 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:14.108 [2024-07-26 05:14:33.152553] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:14.108 [2024-07-26 05:14:33.154761] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:14.108 [2024-07-26 05:14:33.154838] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:14.108 [2024-07-26 05:14:33.154901] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:14.108 [2024-07-26 05:14:33.154982] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:14.108 [2024-07-26 05:14:33.155060] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:14.108 [2024-07-26 05:14:33.155100] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:14.108 [2024-07-26 05:14:33.155116] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state configuring 00:17:14.108 request: 00:17:14.108 { 00:17:14.108 "name": "raid_bdev1", 00:17:14.108 "raid_level": "concat", 00:17:14.108 "base_bdevs": [ 00:17:14.108 "malloc1", 00:17:14.108 "malloc2", 00:17:14.108 "malloc3" 00:17:14.108 ], 00:17:14.108 "superblock": false, 00:17:14.108 "strip_size_kb": 64, 00:17:14.108 "method": "bdev_raid_create", 00:17:14.108 "req_id": 1 00:17:14.108 } 00:17:14.108 Got JSON-RPC error response 00:17:14.108 response: 00:17:14.108 { 00:17:14.108 "code": -17, 00:17:14.108 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:14.108 } 00:17:14.108 05:14:33 -- common/autotest_common.sh@643 -- # es=1 00:17:14.108 05:14:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:14.108 05:14:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:14.108 05:14:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:14.108 05:14:33 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.109 05:14:33 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:14.367 05:14:33 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:14.367 05:14:33 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:14.367 05:14:33 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:14.625 [2024-07-26 05:14:33.620642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:14.625 [2024-07-26 05:14:33.620749] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.625 [2024-07-26 05:14:33.620776] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:17:14.625 [2024-07-26 05:14:33.620792] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.625 [2024-07-26 05:14:33.623379] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.625 [2024-07-26 05:14:33.623442] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:14.625 [2024-07-26 05:14:33.623560] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:14.625 [2024-07-26 05:14:33.623627] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:14.625 pt1 00:17:14.625 05:14:33 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:17:14.625 05:14:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:14.625 05:14:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:14.625 05:14:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:14.625 05:14:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:14.625 05:14:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:14.625 05:14:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:14.625 05:14:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:14.625 05:14:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:14.625 05:14:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:14.625 05:14:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.625 05:14:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.884 05:14:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:14.884 "name": "raid_bdev1", 00:17:14.884 "uuid": "d195c247-13ef-4d64-bd08-5e47851df1bf", 00:17:14.884 "strip_size_kb": 64, 00:17:14.884 "state": "configuring", 00:17:14.884 "raid_level": "concat", 00:17:14.884 "superblock": true, 00:17:14.884 "num_base_bdevs": 3, 00:17:14.884 "num_base_bdevs_discovered": 1, 00:17:14.884 "num_base_bdevs_operational": 3, 00:17:14.884 "base_bdevs_list": [ 00:17:14.884 { 00:17:14.884 "name": "pt1", 00:17:14.884 "uuid": "da6442cc-8385-5929-ae71-9ef53ddbce5f", 00:17:14.884 "is_configured": true, 00:17:14.884 "data_offset": 2048, 00:17:14.884 "data_size": 63488 00:17:14.884 }, 00:17:14.884 { 00:17:14.884 "name": null, 00:17:14.884 "uuid": "6678c862-8e50-5124-bd78-530bd8faeafd", 00:17:14.884 "is_configured": false, 00:17:14.884 "data_offset": 2048, 00:17:14.884 "data_size": 63488 00:17:14.884 }, 00:17:14.884 { 00:17:14.884 "name": null, 00:17:14.884 "uuid": "0a35aae5-353b-58e8-bd27-30fd1ae0292f", 00:17:14.884 "is_configured": false, 00:17:14.884 
"data_offset": 2048, 00:17:14.884 "data_size": 63488 00:17:14.884 } 00:17:14.884 ] 00:17:14.884 }' 00:17:14.884 05:14:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:14.884 05:14:33 -- common/autotest_common.sh@10 -- # set +x 00:17:15.142 05:14:34 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:17:15.142 05:14:34 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:15.400 [2024-07-26 05:14:34.400828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:15.400 [2024-07-26 05:14:34.400917] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.400 [2024-07-26 05:14:34.400948] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:17:15.400 [2024-07-26 05:14:34.400964] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.400 [2024-07-26 05:14:34.401539] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.400 [2024-07-26 05:14:34.401584] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:15.400 [2024-07-26 05:14:34.401683] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:15.400 [2024-07-26 05:14:34.401716] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:15.400 pt2 00:17:15.400 05:14:34 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:15.657 [2024-07-26 05:14:34.616877] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:15.658 05:14:34 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:17:15.658 05:14:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:15.658 05:14:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:15.658 05:14:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:15.658 05:14:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:15.658 05:14:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:15.658 05:14:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:15.658 05:14:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:15.658 05:14:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:15.658 05:14:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:15.658 05:14:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.658 05:14:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.916 05:14:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:15.916 "name": "raid_bdev1", 00:17:15.916 "uuid": "d195c247-13ef-4d64-bd08-5e47851df1bf", 00:17:15.916 "strip_size_kb": 64, 00:17:15.916 "state": "configuring", 00:17:15.916 "raid_level": "concat", 00:17:15.916 "superblock": true, 00:17:15.916 "num_base_bdevs": 3, 00:17:15.916 "num_base_bdevs_discovered": 1, 00:17:15.916 "num_base_bdevs_operational": 3, 00:17:15.916 "base_bdevs_list": [ 00:17:15.916 { 00:17:15.916 "name": "pt1", 00:17:15.916 "uuid": "da6442cc-8385-5929-ae71-9ef53ddbce5f", 00:17:15.916 "is_configured": true, 00:17:15.916 "data_offset": 2048, 00:17:15.916 "data_size": 63488 00:17:15.916 }, 00:17:15.916 { 00:17:15.916 "name": null, 00:17:15.916 "uuid": 
"6678c862-8e50-5124-bd78-530bd8faeafd", 00:17:15.916 "is_configured": false, 00:17:15.916 "data_offset": 2048, 00:17:15.916 "data_size": 63488 00:17:15.916 }, 00:17:15.916 { 00:17:15.916 "name": null, 00:17:15.916 "uuid": "0a35aae5-353b-58e8-bd27-30fd1ae0292f", 00:17:15.916 "is_configured": false, 00:17:15.916 "data_offset": 2048, 00:17:15.916 "data_size": 63488 00:17:15.916 } 00:17:15.916 ] 00:17:15.916 }' 00:17:15.916 05:14:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:15.916 05:14:34 -- common/autotest_common.sh@10 -- # set +x 00:17:16.175 05:14:35 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:16.175 05:14:35 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:16.175 05:14:35 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:16.433 [2024-07-26 05:14:35.397139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:16.433 [2024-07-26 05:14:35.397253] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.433 [2024-07-26 05:14:35.397302] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:17:16.433 [2024-07-26 05:14:35.397315] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.433 [2024-07-26 05:14:35.397876] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.433 [2024-07-26 05:14:35.397950] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:16.433 [2024-07-26 05:14:35.398117] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:16.433 [2024-07-26 05:14:35.398148] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:16.433 pt2 00:17:16.433 05:14:35 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:16.433 05:14:35 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:16.433 05:14:35 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:16.691 [2024-07-26 05:14:35.657206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:16.691 [2024-07-26 05:14:35.657307] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.691 [2024-07-26 05:14:35.657353] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:17:16.691 [2024-07-26 05:14:35.657367] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.691 [2024-07-26 05:14:35.657907] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.691 [2024-07-26 05:14:35.657947] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:16.691 [2024-07-26 05:14:35.658107] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:16.691 [2024-07-26 05:14:35.658139] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:16.691 [2024-07-26 05:14:35.658297] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009980 00:17:16.691 [2024-07-26 05:14:35.658313] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:16.691 [2024-07-26 05:14:35.658440] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x50d000005790 00:17:16.691 [2024-07-26 05:14:35.658803] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009980 00:17:16.691 [2024-07-26 05:14:35.658834] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009980 00:17:16.691 [2024-07-26 05:14:35.658977] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.691 pt3 00:17:16.691 05:14:35 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:16.691 05:14:35 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:16.691 05:14:35 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:17:16.691 05:14:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:16.691 05:14:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:16.691 05:14:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:16.691 05:14:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:16.691 05:14:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:16.691 05:14:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:16.691 05:14:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:16.691 05:14:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:16.691 05:14:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:16.691 05:14:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.691 05:14:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.949 05:14:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:16.949 "name": "raid_bdev1", 00:17:16.949 "uuid": "d195c247-13ef-4d64-bd08-5e47851df1bf", 00:17:16.949 "strip_size_kb": 64, 00:17:16.949 "state": "online", 00:17:16.949 "raid_level": "concat", 00:17:16.949 "superblock": true, 00:17:16.949 "num_base_bdevs": 3, 00:17:16.949 "num_base_bdevs_discovered": 3, 00:17:16.949 "num_base_bdevs_operational": 3, 00:17:16.949 "base_bdevs_list": [ 00:17:16.949 { 00:17:16.949 "name": "pt1", 00:17:16.949 "uuid": "da6442cc-8385-5929-ae71-9ef53ddbce5f", 00:17:16.949 "is_configured": true, 00:17:16.949 "data_offset": 2048, 00:17:16.949 "data_size": 63488 00:17:16.949 }, 00:17:16.949 { 00:17:16.949 "name": "pt2", 00:17:16.949 "uuid": "6678c862-8e50-5124-bd78-530bd8faeafd", 00:17:16.949 "is_configured": true, 00:17:16.949 "data_offset": 2048, 00:17:16.949 "data_size": 63488 00:17:16.949 }, 00:17:16.949 { 00:17:16.949 "name": "pt3", 00:17:16.949 "uuid": "0a35aae5-353b-58e8-bd27-30fd1ae0292f", 00:17:16.949 "is_configured": true, 00:17:16.949 "data_offset": 2048, 00:17:16.949 "data_size": 63488 00:17:16.949 } 00:17:16.949 ] 00:17:16.949 }' 00:17:16.949 05:14:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:16.949 05:14:35 -- common/autotest_common.sh@10 -- # set +x 00:17:17.206 05:14:36 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:17.206 05:14:36 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:17.464 [2024-07-26 05:14:36.449672] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:17.464 05:14:36 -- bdev/bdev_raid.sh@430 -- # '[' d195c247-13ef-4d64-bd08-5e47851df1bf '!=' d195c247-13ef-4d64-bd08-5e47851df1bf ']' 00:17:17.464 05:14:36 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:17:17.464 05:14:36 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:17.464 
05:14:36 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:17.464 05:14:36 -- bdev/bdev_raid.sh@511 -- # killprocess 72984 00:17:17.464 05:14:36 -- common/autotest_common.sh@926 -- # '[' -z 72984 ']' 00:17:17.464 05:14:36 -- common/autotest_common.sh@930 -- # kill -0 72984 00:17:17.464 05:14:36 -- common/autotest_common.sh@931 -- # uname 00:17:17.464 05:14:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:17.464 05:14:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72984 00:17:17.464 killing process with pid 72984 00:17:17.464 05:14:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:17.464 05:14:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:17.464 05:14:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72984' 00:17:17.464 05:14:36 -- common/autotest_common.sh@945 -- # kill 72984 00:17:17.464 [2024-07-26 05:14:36.501114] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:17.464 [2024-07-26 05:14:36.501199] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:17.464 05:14:36 -- common/autotest_common.sh@950 -- # wait 72984 00:17:17.464 [2024-07-26 05:14:36.501265] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:17.464 [2024-07-26 05:14:36.501283] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state offline 00:17:17.723 [2024-07-26 05:14:36.724601] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:18.656 ************************************ 00:17:18.656 END TEST raid_superblock_test 00:17:18.656 ************************************ 00:17:18.656 05:14:37 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:18.656 00:17:18.656 real 0m9.596s 00:17:18.656 user 0m15.824s 00:17:18.656 sys 0m1.365s 00:17:18.656 05:14:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:18.656 05:14:37 -- common/autotest_common.sh@10 -- # set +x 00:17:18.914 05:14:37 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:18.914 05:14:37 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:17:18.914 05:14:37 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:18.914 05:14:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:18.914 05:14:37 -- common/autotest_common.sh@10 -- # set +x 00:17:18.914 ************************************ 00:17:18.914 START TEST raid_state_function_test 00:17:18.914 ************************************ 00:17:18.914 05:14:37 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 false 00:17:18.914 05:14:37 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:17:18.914 05:14:37 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:17:18.914 05:14:37 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:18.914 05:14:37 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:18.914 05:14:37 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:18.914 05:14:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:18.914 05:14:37 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:17:18.914 05:14:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:18.914 05:14:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:18.914 05:14:37 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:17:18.914 05:14:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:18.914 05:14:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs 
)) 00:17:18.914 05:14:37 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:17:18.914 05:14:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:18.914 05:14:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:18.914 05:14:37 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:18.914 05:14:37 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:18.914 05:14:37 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:18.914 05:14:37 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:18.914 05:14:37 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:18.914 05:14:37 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:18.914 05:14:37 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:17:18.914 05:14:37 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:17:18.914 05:14:37 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:18.914 05:14:37 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:18.914 Process raid pid: 73270 00:17:18.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:18.914 05:14:37 -- bdev/bdev_raid.sh@226 -- # raid_pid=73270 00:17:18.914 05:14:37 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 73270' 00:17:18.914 05:14:37 -- bdev/bdev_raid.sh@228 -- # waitforlisten 73270 /var/tmp/spdk-raid.sock 00:17:18.914 05:14:37 -- common/autotest_common.sh@819 -- # '[' -z 73270 ']' 00:17:18.914 05:14:37 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:18.914 05:14:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:18.914 05:14:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:18.914 05:14:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:18.914 05:14:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:18.914 05:14:37 -- common/autotest_common.sh@10 -- # set +x 00:17:18.914 [2024-07-26 05:14:37.872159] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
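raid_state_function_test repeats the state checks for raid1 with no superblock; its first step is to declare the array before any base bdev exists. A sketch of that step against the bdev_svc instance started above:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # raid1 takes no strip size and this test omits -s (no superblock); none of the
  # base bdevs exist yet, so creation only records the desired configuration
  $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

  # expected per the trace: state "configuring", 0 of 3 base bdevs discovered
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'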
00:17:18.914 [2024-07-26 05:14:37.872523] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:19.172 [2024-07-26 05:14:38.043785] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.172 [2024-07-26 05:14:38.208627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.429 [2024-07-26 05:14:38.367776] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:19.687 05:14:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:19.687 05:14:38 -- common/autotest_common.sh@852 -- # return 0 00:17:19.687 05:14:38 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:19.945 [2024-07-26 05:14:38.961656] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:19.945 [2024-07-26 05:14:38.961732] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:19.945 [2024-07-26 05:14:38.961748] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:19.945 [2024-07-26 05:14:38.961762] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:19.945 [2024-07-26 05:14:38.961771] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:19.945 [2024-07-26 05:14:38.961783] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:19.945 05:14:38 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:19.945 05:14:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:19.945 05:14:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:19.945 05:14:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:19.945 05:14:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:19.945 05:14:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:19.945 05:14:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:19.945 05:14:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:19.945 05:14:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:19.945 05:14:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:19.945 05:14:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.945 05:14:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.203 05:14:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:20.203 "name": "Existed_Raid", 00:17:20.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.203 "strip_size_kb": 0, 00:17:20.203 "state": "configuring", 00:17:20.203 "raid_level": "raid1", 00:17:20.203 "superblock": false, 00:17:20.203 "num_base_bdevs": 3, 00:17:20.203 "num_base_bdevs_discovered": 0, 00:17:20.203 "num_base_bdevs_operational": 3, 00:17:20.203 "base_bdevs_list": [ 00:17:20.203 { 00:17:20.203 "name": "BaseBdev1", 00:17:20.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.203 "is_configured": false, 00:17:20.203 "data_offset": 0, 00:17:20.203 "data_size": 0 00:17:20.203 }, 00:17:20.203 { 00:17:20.203 "name": "BaseBdev2", 00:17:20.203 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:20.203 "is_configured": false, 00:17:20.203 "data_offset": 0, 00:17:20.203 "data_size": 0 00:17:20.203 }, 00:17:20.203 { 00:17:20.203 "name": "BaseBdev3", 00:17:20.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.203 "is_configured": false, 00:17:20.203 "data_offset": 0, 00:17:20.203 "data_size": 0 00:17:20.203 } 00:17:20.203 ] 00:17:20.203 }' 00:17:20.203 05:14:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:20.203 05:14:39 -- common/autotest_common.sh@10 -- # set +x 00:17:20.461 05:14:39 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:20.719 [2024-07-26 05:14:39.745754] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:20.719 [2024-07-26 05:14:39.745799] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:17:20.719 05:14:39 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:20.976 [2024-07-26 05:14:40.001872] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:20.976 [2024-07-26 05:14:40.001946] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:20.977 [2024-07-26 05:14:40.001962] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:20.977 [2024-07-26 05:14:40.001981] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:20.977 [2024-07-26 05:14:40.002022] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:20.977 [2024-07-26 05:14:40.002041] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:20.977 05:14:40 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:21.233 [2024-07-26 05:14:40.246711] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:21.233 BaseBdev1 00:17:21.233 05:14:40 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:21.233 05:14:40 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:21.233 05:14:40 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:21.233 05:14:40 -- common/autotest_common.sh@889 -- # local i 00:17:21.233 05:14:40 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:21.233 05:14:40 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:21.233 05:14:40 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:21.492 05:14:40 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:21.752 [ 00:17:21.752 { 00:17:21.752 "name": "BaseBdev1", 00:17:21.752 "aliases": [ 00:17:21.752 "da72ac88-9a78-4689-b64b-ddcf62db34f2" 00:17:21.752 ], 00:17:21.752 "product_name": "Malloc disk", 00:17:21.752 "block_size": 512, 00:17:21.752 "num_blocks": 65536, 00:17:21.752 "uuid": "da72ac88-9a78-4689-b64b-ddcf62db34f2", 00:17:21.752 "assigned_rate_limits": { 00:17:21.752 "rw_ios_per_sec": 0, 00:17:21.752 "rw_mbytes_per_sec": 0, 00:17:21.752 "r_mbytes_per_sec": 0, 00:17:21.752 "w_mbytes_per_sec": 0 
00:17:21.752 }, 00:17:21.752 "claimed": true, 00:17:21.752 "claim_type": "exclusive_write", 00:17:21.752 "zoned": false, 00:17:21.752 "supported_io_types": { 00:17:21.752 "read": true, 00:17:21.752 "write": true, 00:17:21.752 "unmap": true, 00:17:21.752 "write_zeroes": true, 00:17:21.752 "flush": true, 00:17:21.752 "reset": true, 00:17:21.752 "compare": false, 00:17:21.752 "compare_and_write": false, 00:17:21.752 "abort": true, 00:17:21.752 "nvme_admin": false, 00:17:21.752 "nvme_io": false 00:17:21.752 }, 00:17:21.752 "memory_domains": [ 00:17:21.752 { 00:17:21.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.752 "dma_device_type": 2 00:17:21.752 } 00:17:21.752 ], 00:17:21.752 "driver_specific": {} 00:17:21.752 } 00:17:21.752 ] 00:17:21.752 05:14:40 -- common/autotest_common.sh@895 -- # return 0 00:17:21.752 05:14:40 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:21.752 05:14:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:21.752 05:14:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:21.752 05:14:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:21.752 05:14:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:21.752 05:14:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:21.752 05:14:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:21.752 05:14:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:21.752 05:14:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:21.752 05:14:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:21.752 05:14:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.752 05:14:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:22.010 05:14:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:22.010 "name": "Existed_Raid", 00:17:22.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.010 "strip_size_kb": 0, 00:17:22.010 "state": "configuring", 00:17:22.010 "raid_level": "raid1", 00:17:22.010 "superblock": false, 00:17:22.010 "num_base_bdevs": 3, 00:17:22.010 "num_base_bdevs_discovered": 1, 00:17:22.010 "num_base_bdevs_operational": 3, 00:17:22.010 "base_bdevs_list": [ 00:17:22.010 { 00:17:22.010 "name": "BaseBdev1", 00:17:22.010 "uuid": "da72ac88-9a78-4689-b64b-ddcf62db34f2", 00:17:22.010 "is_configured": true, 00:17:22.010 "data_offset": 0, 00:17:22.010 "data_size": 65536 00:17:22.010 }, 00:17:22.010 { 00:17:22.010 "name": "BaseBdev2", 00:17:22.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.010 "is_configured": false, 00:17:22.010 "data_offset": 0, 00:17:22.010 "data_size": 0 00:17:22.010 }, 00:17:22.010 { 00:17:22.010 "name": "BaseBdev3", 00:17:22.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.010 "is_configured": false, 00:17:22.010 "data_offset": 0, 00:17:22.010 "data_size": 0 00:17:22.010 } 00:17:22.010 ] 00:17:22.010 }' 00:17:22.010 05:14:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:22.010 05:14:40 -- common/autotest_common.sh@10 -- # set +x 00:17:22.268 05:14:41 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:22.526 [2024-07-26 05:14:41.547084] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:22.526 [2024-07-26 05:14:41.547156] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 
name Existed_Raid, state configuring 00:17:22.526 05:14:41 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:22.526 05:14:41 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:22.784 [2024-07-26 05:14:41.755211] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:22.784 [2024-07-26 05:14:41.757230] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:22.784 [2024-07-26 05:14:41.757298] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:22.784 [2024-07-26 05:14:41.757313] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:22.784 [2024-07-26 05:14:41.757327] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:22.784 05:14:41 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:22.784 05:14:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:22.784 05:14:41 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:22.784 05:14:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:22.784 05:14:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:22.784 05:14:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:22.784 05:14:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:22.784 05:14:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:22.784 05:14:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:22.784 05:14:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:22.784 05:14:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:22.784 05:14:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:22.784 05:14:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:22.784 05:14:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.041 05:14:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:23.042 "name": "Existed_Raid", 00:17:23.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.042 "strip_size_kb": 0, 00:17:23.042 "state": "configuring", 00:17:23.042 "raid_level": "raid1", 00:17:23.042 "superblock": false, 00:17:23.042 "num_base_bdevs": 3, 00:17:23.042 "num_base_bdevs_discovered": 1, 00:17:23.042 "num_base_bdevs_operational": 3, 00:17:23.042 "base_bdevs_list": [ 00:17:23.042 { 00:17:23.042 "name": "BaseBdev1", 00:17:23.042 "uuid": "da72ac88-9a78-4689-b64b-ddcf62db34f2", 00:17:23.042 "is_configured": true, 00:17:23.042 "data_offset": 0, 00:17:23.042 "data_size": 65536 00:17:23.042 }, 00:17:23.042 { 00:17:23.042 "name": "BaseBdev2", 00:17:23.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.042 "is_configured": false, 00:17:23.042 "data_offset": 0, 00:17:23.042 "data_size": 0 00:17:23.042 }, 00:17:23.042 { 00:17:23.042 "name": "BaseBdev3", 00:17:23.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.042 "is_configured": false, 00:17:23.042 "data_offset": 0, 00:17:23.042 "data_size": 0 00:17:23.042 } 00:17:23.042 ] 00:17:23.042 }' 00:17:23.042 05:14:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:23.042 05:14:41 -- common/autotest_common.sh@10 -- # set +x 00:17:23.299 05:14:42 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:23.557 [2024-07-26 05:14:42.544462] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:23.557 BaseBdev2 00:17:23.557 05:14:42 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:23.557 05:14:42 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:23.557 05:14:42 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:23.557 05:14:42 -- common/autotest_common.sh@889 -- # local i 00:17:23.557 05:14:42 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:23.557 05:14:42 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:23.557 05:14:42 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:23.815 05:14:42 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:24.073 [ 00:17:24.073 { 00:17:24.073 "name": "BaseBdev2", 00:17:24.073 "aliases": [ 00:17:24.073 "378ecbd9-2da5-40df-91b8-090fdf1041f9" 00:17:24.073 ], 00:17:24.073 "product_name": "Malloc disk", 00:17:24.073 "block_size": 512, 00:17:24.073 "num_blocks": 65536, 00:17:24.073 "uuid": "378ecbd9-2da5-40df-91b8-090fdf1041f9", 00:17:24.073 "assigned_rate_limits": { 00:17:24.073 "rw_ios_per_sec": 0, 00:17:24.073 "rw_mbytes_per_sec": 0, 00:17:24.073 "r_mbytes_per_sec": 0, 00:17:24.073 "w_mbytes_per_sec": 0 00:17:24.073 }, 00:17:24.073 "claimed": true, 00:17:24.073 "claim_type": "exclusive_write", 00:17:24.073 "zoned": false, 00:17:24.073 "supported_io_types": { 00:17:24.073 "read": true, 00:17:24.073 "write": true, 00:17:24.073 "unmap": true, 00:17:24.073 "write_zeroes": true, 00:17:24.073 "flush": true, 00:17:24.073 "reset": true, 00:17:24.073 "compare": false, 00:17:24.073 "compare_and_write": false, 00:17:24.073 "abort": true, 00:17:24.073 "nvme_admin": false, 00:17:24.073 "nvme_io": false 00:17:24.073 }, 00:17:24.073 "memory_domains": [ 00:17:24.073 { 00:17:24.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.073 "dma_device_type": 2 00:17:24.073 } 00:17:24.073 ], 00:17:24.073 "driver_specific": {} 00:17:24.073 } 00:17:24.073 ] 00:17:24.073 05:14:43 -- common/autotest_common.sh@895 -- # return 0 00:17:24.073 05:14:43 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:24.073 05:14:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:24.073 05:14:43 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:24.073 05:14:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:24.073 05:14:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:24.073 05:14:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:24.073 05:14:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:24.073 05:14:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:24.073 05:14:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:24.073 05:14:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:24.073 05:14:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:24.073 05:14:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:24.073 05:14:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.073 05:14:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.330 05:14:43 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:17:24.330 "name": "Existed_Raid", 00:17:24.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.331 "strip_size_kb": 0, 00:17:24.331 "state": "configuring", 00:17:24.331 "raid_level": "raid1", 00:17:24.331 "superblock": false, 00:17:24.331 "num_base_bdevs": 3, 00:17:24.331 "num_base_bdevs_discovered": 2, 00:17:24.331 "num_base_bdevs_operational": 3, 00:17:24.331 "base_bdevs_list": [ 00:17:24.331 { 00:17:24.331 "name": "BaseBdev1", 00:17:24.331 "uuid": "da72ac88-9a78-4689-b64b-ddcf62db34f2", 00:17:24.331 "is_configured": true, 00:17:24.331 "data_offset": 0, 00:17:24.331 "data_size": 65536 00:17:24.331 }, 00:17:24.331 { 00:17:24.331 "name": "BaseBdev2", 00:17:24.331 "uuid": "378ecbd9-2da5-40df-91b8-090fdf1041f9", 00:17:24.331 "is_configured": true, 00:17:24.331 "data_offset": 0, 00:17:24.331 "data_size": 65536 00:17:24.331 }, 00:17:24.331 { 00:17:24.331 "name": "BaseBdev3", 00:17:24.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.331 "is_configured": false, 00:17:24.331 "data_offset": 0, 00:17:24.331 "data_size": 0 00:17:24.331 } 00:17:24.331 ] 00:17:24.331 }' 00:17:24.331 05:14:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:24.331 05:14:43 -- common/autotest_common.sh@10 -- # set +x 00:17:24.588 05:14:43 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:24.846 [2024-07-26 05:14:43.781342] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:24.846 [2024-07-26 05:14:43.781590] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:17:24.846 [2024-07-26 05:14:43.781648] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:24.846 [2024-07-26 05:14:43.781865] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:17:24.846 [2024-07-26 05:14:43.782521] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:17:24.846 [2024-07-26 05:14:43.782683] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:17:24.846 [2024-07-26 05:14:43.783243] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.846 BaseBdev3 00:17:24.846 05:14:43 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:24.846 05:14:43 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:24.846 05:14:43 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:24.846 05:14:43 -- common/autotest_common.sh@889 -- # local i 00:17:24.846 05:14:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:24.846 05:14:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:24.846 05:14:43 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:25.104 05:14:43 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:25.104 [ 00:17:25.104 { 00:17:25.104 "name": "BaseBdev3", 00:17:25.104 "aliases": [ 00:17:25.104 "bd5425b5-5e26-4efd-8c5a-6925863be764" 00:17:25.104 ], 00:17:25.104 "product_name": "Malloc disk", 00:17:25.104 "block_size": 512, 00:17:25.104 "num_blocks": 65536, 00:17:25.104 "uuid": "bd5425b5-5e26-4efd-8c5a-6925863be764", 00:17:25.104 "assigned_rate_limits": { 00:17:25.104 "rw_ios_per_sec": 0, 00:17:25.104 "rw_mbytes_per_sec": 0, 
00:17:25.104 "r_mbytes_per_sec": 0, 00:17:25.104 "w_mbytes_per_sec": 0 00:17:25.104 }, 00:17:25.104 "claimed": true, 00:17:25.104 "claim_type": "exclusive_write", 00:17:25.104 "zoned": false, 00:17:25.104 "supported_io_types": { 00:17:25.104 "read": true, 00:17:25.104 "write": true, 00:17:25.104 "unmap": true, 00:17:25.104 "write_zeroes": true, 00:17:25.104 "flush": true, 00:17:25.104 "reset": true, 00:17:25.104 "compare": false, 00:17:25.104 "compare_and_write": false, 00:17:25.104 "abort": true, 00:17:25.104 "nvme_admin": false, 00:17:25.104 "nvme_io": false 00:17:25.104 }, 00:17:25.104 "memory_domains": [ 00:17:25.104 { 00:17:25.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.104 "dma_device_type": 2 00:17:25.104 } 00:17:25.104 ], 00:17:25.104 "driver_specific": {} 00:17:25.104 } 00:17:25.104 ] 00:17:25.104 05:14:44 -- common/autotest_common.sh@895 -- # return 0 00:17:25.104 05:14:44 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:25.104 05:14:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:25.104 05:14:44 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:25.104 05:14:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:25.104 05:14:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:25.104 05:14:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:25.104 05:14:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:25.104 05:14:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:25.104 05:14:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:25.104 05:14:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:25.104 05:14:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:25.105 05:14:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:25.105 05:14:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.105 05:14:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.362 05:14:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:25.362 "name": "Existed_Raid", 00:17:25.362 "uuid": "a3031442-d545-4313-878a-1fbdc8ee5dea", 00:17:25.362 "strip_size_kb": 0, 00:17:25.362 "state": "online", 00:17:25.362 "raid_level": "raid1", 00:17:25.362 "superblock": false, 00:17:25.362 "num_base_bdevs": 3, 00:17:25.362 "num_base_bdevs_discovered": 3, 00:17:25.362 "num_base_bdevs_operational": 3, 00:17:25.362 "base_bdevs_list": [ 00:17:25.362 { 00:17:25.362 "name": "BaseBdev1", 00:17:25.362 "uuid": "da72ac88-9a78-4689-b64b-ddcf62db34f2", 00:17:25.362 "is_configured": true, 00:17:25.362 "data_offset": 0, 00:17:25.362 "data_size": 65536 00:17:25.362 }, 00:17:25.362 { 00:17:25.362 "name": "BaseBdev2", 00:17:25.362 "uuid": "378ecbd9-2da5-40df-91b8-090fdf1041f9", 00:17:25.362 "is_configured": true, 00:17:25.362 "data_offset": 0, 00:17:25.362 "data_size": 65536 00:17:25.362 }, 00:17:25.362 { 00:17:25.362 "name": "BaseBdev3", 00:17:25.362 "uuid": "bd5425b5-5e26-4efd-8c5a-6925863be764", 00:17:25.362 "is_configured": true, 00:17:25.362 "data_offset": 0, 00:17:25.362 "data_size": 65536 00:17:25.362 } 00:17:25.362 ] 00:17:25.362 }' 00:17:25.362 05:14:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:25.362 05:14:44 -- common/autotest_common.sh@10 -- # set +x 00:17:25.619 05:14:44 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:25.877 [2024-07-26 
05:14:44.961851] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:26.134 05:14:45 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:26.134 05:14:45 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:17:26.134 05:14:45 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:26.134 05:14:45 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:26.134 05:14:45 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:17:26.134 05:14:45 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:26.134 05:14:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:26.134 05:14:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:26.134 05:14:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:26.134 05:14:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:26.134 05:14:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:26.134 05:14:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:26.134 05:14:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:26.134 05:14:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:26.134 05:14:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:26.134 05:14:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.134 05:14:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.391 05:14:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:26.391 "name": "Existed_Raid", 00:17:26.391 "uuid": "a3031442-d545-4313-878a-1fbdc8ee5dea", 00:17:26.391 "strip_size_kb": 0, 00:17:26.391 "state": "online", 00:17:26.391 "raid_level": "raid1", 00:17:26.391 "superblock": false, 00:17:26.391 "num_base_bdevs": 3, 00:17:26.391 "num_base_bdevs_discovered": 2, 00:17:26.391 "num_base_bdevs_operational": 2, 00:17:26.391 "base_bdevs_list": [ 00:17:26.391 { 00:17:26.391 "name": null, 00:17:26.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.391 "is_configured": false, 00:17:26.391 "data_offset": 0, 00:17:26.391 "data_size": 65536 00:17:26.391 }, 00:17:26.391 { 00:17:26.391 "name": "BaseBdev2", 00:17:26.391 "uuid": "378ecbd9-2da5-40df-91b8-090fdf1041f9", 00:17:26.391 "is_configured": true, 00:17:26.391 "data_offset": 0, 00:17:26.391 "data_size": 65536 00:17:26.391 }, 00:17:26.391 { 00:17:26.391 "name": "BaseBdev3", 00:17:26.391 "uuid": "bd5425b5-5e26-4efd-8c5a-6925863be764", 00:17:26.391 "is_configured": true, 00:17:26.391 "data_offset": 0, 00:17:26.391 "data_size": 65536 00:17:26.391 } 00:17:26.391 ] 00:17:26.391 }' 00:17:26.391 05:14:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:26.391 05:14:45 -- common/autotest_common.sh@10 -- # set +x 00:17:26.648 05:14:45 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:26.648 05:14:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:26.648 05:14:45 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.648 05:14:45 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:26.648 05:14:45 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:26.648 05:14:45 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:26.648 05:14:45 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:26.905 [2024-07-26 05:14:46.005112] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
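What the preceding trace exercises is raid1's redundancy path: after BaseBdev1 is deleted out from under the online array, verify_raid_bdev_state confirms that Existed_Raid stays online with num_base_bdevs_discovered dropping from 3 to 2 (has_redundancy returns 0 for raid1, so the expected state remains online). A hedged sketch of that remove-and-verify step, using the same RPC calls and jq filter seen in the trace; the rpc wrapper function is illustrative, and Existed_Raid/BaseBdev1 are this test's names rather than fixed SPDK identifiers.

SPDK_DIR=/home/vagrant/spdk_repo/spdk
rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-raid.sock "$@"; }

# Pull one member out of the online raid1 array.
rpc bdev_malloc_delete BaseBdev1

# Re-read the raid bdev; raid1 should absorb the loss of one of three members.
info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
state=$(jq -r '.state' <<< "$info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")

[[ $state == online && $discovered -eq 2 ]] \
    || echo "unexpected raid state: state=$state discovered=$discovered"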
00:17:27.162 05:14:46 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:27.162 05:14:46 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:27.162 05:14:46 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.162 05:14:46 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:27.419 05:14:46 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:27.419 05:14:46 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:27.419 05:14:46 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:27.675 [2024-07-26 05:14:46.583808] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:27.675 [2024-07-26 05:14:46.584068] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:27.675 [2024-07-26 05:14:46.584249] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:27.675 [2024-07-26 05:14:46.656396] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:27.675 [2024-07-26 05:14:46.656634] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:17:27.675 05:14:46 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:27.675 05:14:46 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:27.675 05:14:46 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.675 05:14:46 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:27.932 05:14:46 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:27.932 05:14:46 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:27.932 05:14:46 -- bdev/bdev_raid.sh@287 -- # killprocess 73270 00:17:27.932 05:14:46 -- common/autotest_common.sh@926 -- # '[' -z 73270 ']' 00:17:27.932 05:14:46 -- common/autotest_common.sh@930 -- # kill -0 73270 00:17:27.932 05:14:46 -- common/autotest_common.sh@931 -- # uname 00:17:27.932 05:14:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:27.932 05:14:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73270 00:17:27.932 killing process with pid 73270 00:17:27.932 05:14:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:27.932 05:14:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:27.932 05:14:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73270' 00:17:27.932 05:14:46 -- common/autotest_common.sh@945 -- # kill 73270 00:17:27.932 [2024-07-26 05:14:46.930051] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:27.932 05:14:46 -- common/autotest_common.sh@950 -- # wait 73270 00:17:27.932 [2024-07-26 05:14:46.930215] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:28.864 ************************************ 00:17:28.864 END TEST raid_state_function_test 00:17:28.864 ************************************ 00:17:28.864 05:14:47 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:28.864 00:17:28.864 real 0m10.166s 00:17:28.864 user 0m16.806s 00:17:28.864 sys 0m1.554s 00:17:28.864 05:14:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:28.864 05:14:47 -- common/autotest_common.sh@10 -- # set +x 00:17:29.122 05:14:48 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:17:29.122 
05:14:48 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:29.122 05:14:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:29.122 05:14:48 -- common/autotest_common.sh@10 -- # set +x 00:17:29.122 ************************************ 00:17:29.122 START TEST raid_state_function_test_sb 00:17:29.122 ************************************ 00:17:29.122 05:14:48 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 true 00:17:29.122 05:14:48 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:17:29.122 05:14:48 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:17:29.122 05:14:48 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:29.122 05:14:48 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:29.122 05:14:48 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:29.122 05:14:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:29.122 05:14:48 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:17:29.122 05:14:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:29.122 05:14:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:29.122 05:14:48 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:17:29.122 05:14:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:29.122 05:14:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:29.122 05:14:48 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:17:29.122 05:14:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:29.122 05:14:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:29.122 05:14:48 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:29.122 05:14:48 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:29.122 05:14:48 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:29.122 05:14:48 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:29.122 05:14:48 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:29.122 05:14:48 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:29.122 05:14:48 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:17:29.122 05:14:48 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:17:29.122 05:14:48 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:29.122 05:14:48 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:29.122 05:14:48 -- bdev/bdev_raid.sh@226 -- # raid_pid=73606 00:17:29.122 Process raid pid: 73606 00:17:29.122 05:14:48 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 73606' 00:17:29.122 05:14:48 -- bdev/bdev_raid.sh@228 -- # waitforlisten 73606 /var/tmp/spdk-raid.sock 00:17:29.122 05:14:48 -- common/autotest_common.sh@819 -- # '[' -z 73606 ']' 00:17:29.122 05:14:48 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:29.122 05:14:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:29.122 05:14:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:29.122 05:14:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:29.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:29.122 05:14:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:29.122 05:14:48 -- common/autotest_common.sh@10 -- # set +x 00:17:29.122 [2024-07-26 05:14:48.091831] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
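This second pass is the same state-machine walk with superblocks enabled: run_test passes true, so superblock_create_arg becomes -s and every bdev_raid_create below carries it, which is why the configured base bdevs later report data_offset 2048 and data_size 63488 blocks instead of 0 and 65536. Below is a compressed, hedged sketch of the construct-with-superblock flow the following trace performs; the test itself does it incrementally, re-checking the configuring state after each base bdev, while the 32 MiB / 512-byte-block malloc sizes and RPC arguments are taken directly from the traced calls and the rpc wrapper is illustrative.

SPDK_DIR=/home/vagrant/spdk_repo/spdk
rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-raid.sock "$@"; }

# Three 32 MiB malloc bdevs with a 512-byte block size (65536 blocks each).
for b in BaseBdev1 BaseBdev2 BaseBdev3; do
    rpc bdev_malloc_create 32 512 -b "$b"
done

# Create the raid1 bdev; -s requests an on-disk superblock on the members,
# which is what reserves the 2048-block data_offset seen later in the trace.
rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

# The array should report online once all three members are claimed.
rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'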
00:17:29.122 [2024-07-26 05:14:48.092018] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.380 [2024-07-26 05:14:48.265072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.380 [2024-07-26 05:14:48.432064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.638 [2024-07-26 05:14:48.591784] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:30.247 05:14:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:30.247 05:14:49 -- common/autotest_common.sh@852 -- # return 0 00:17:30.247 05:14:49 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:30.247 [2024-07-26 05:14:49.271308] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:30.247 [2024-07-26 05:14:49.271388] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:30.247 [2024-07-26 05:14:49.271404] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:30.247 [2024-07-26 05:14:49.271419] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:30.247 [2024-07-26 05:14:49.271428] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:30.247 [2024-07-26 05:14:49.271440] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:30.247 05:14:49 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:30.247 05:14:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:30.247 05:14:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:30.247 05:14:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:30.247 05:14:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:30.247 05:14:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:30.247 05:14:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:30.247 05:14:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:30.247 05:14:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:30.247 05:14:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:30.247 05:14:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.247 05:14:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.504 05:14:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:30.504 "name": "Existed_Raid", 00:17:30.504 "uuid": "82909e58-4bba-437b-b63d-ccc3e205f85d", 00:17:30.504 "strip_size_kb": 0, 00:17:30.504 "state": "configuring", 00:17:30.504 "raid_level": "raid1", 00:17:30.504 "superblock": true, 00:17:30.504 "num_base_bdevs": 3, 00:17:30.504 "num_base_bdevs_discovered": 0, 00:17:30.504 "num_base_bdevs_operational": 3, 00:17:30.504 "base_bdevs_list": [ 00:17:30.504 { 00:17:30.504 "name": "BaseBdev1", 00:17:30.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.504 "is_configured": false, 00:17:30.504 "data_offset": 0, 00:17:30.504 "data_size": 0 00:17:30.504 }, 00:17:30.504 { 00:17:30.504 "name": "BaseBdev2", 00:17:30.504 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:30.504 "is_configured": false, 00:17:30.504 "data_offset": 0, 00:17:30.504 "data_size": 0 00:17:30.504 }, 00:17:30.504 { 00:17:30.504 "name": "BaseBdev3", 00:17:30.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.504 "is_configured": false, 00:17:30.504 "data_offset": 0, 00:17:30.504 "data_size": 0 00:17:30.504 } 00:17:30.504 ] 00:17:30.504 }' 00:17:30.504 05:14:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:30.504 05:14:49 -- common/autotest_common.sh@10 -- # set +x 00:17:30.762 05:14:49 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:31.020 [2024-07-26 05:14:50.051375] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:31.020 [2024-07-26 05:14:50.051432] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:17:31.020 05:14:50 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:31.279 [2024-07-26 05:14:50.311502] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:31.279 [2024-07-26 05:14:50.311577] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:31.279 [2024-07-26 05:14:50.311591] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:31.279 [2024-07-26 05:14:50.311608] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:31.279 [2024-07-26 05:14:50.311616] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:31.279 [2024-07-26 05:14:50.311628] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:31.279 05:14:50 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:31.541 [2024-07-26 05:14:50.591885] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:31.541 BaseBdev1 00:17:31.541 05:14:50 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:31.541 05:14:50 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:31.541 05:14:50 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:31.541 05:14:50 -- common/autotest_common.sh@889 -- # local i 00:17:31.541 05:14:50 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:31.541 05:14:50 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:31.541 05:14:50 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:31.799 05:14:50 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:32.056 [ 00:17:32.056 { 00:17:32.056 "name": "BaseBdev1", 00:17:32.056 "aliases": [ 00:17:32.056 "1e90d6fd-df28-4c91-9cf6-bd5881d50573" 00:17:32.056 ], 00:17:32.056 "product_name": "Malloc disk", 00:17:32.056 "block_size": 512, 00:17:32.056 "num_blocks": 65536, 00:17:32.056 "uuid": "1e90d6fd-df28-4c91-9cf6-bd5881d50573", 00:17:32.056 "assigned_rate_limits": { 00:17:32.056 "rw_ios_per_sec": 0, 00:17:32.056 "rw_mbytes_per_sec": 0, 00:17:32.056 "r_mbytes_per_sec": 0, 00:17:32.056 "w_mbytes_per_sec": 0 
00:17:32.056 }, 00:17:32.057 "claimed": true, 00:17:32.057 "claim_type": "exclusive_write", 00:17:32.057 "zoned": false, 00:17:32.057 "supported_io_types": { 00:17:32.057 "read": true, 00:17:32.057 "write": true, 00:17:32.057 "unmap": true, 00:17:32.057 "write_zeroes": true, 00:17:32.057 "flush": true, 00:17:32.057 "reset": true, 00:17:32.057 "compare": false, 00:17:32.057 "compare_and_write": false, 00:17:32.057 "abort": true, 00:17:32.057 "nvme_admin": false, 00:17:32.057 "nvme_io": false 00:17:32.057 }, 00:17:32.057 "memory_domains": [ 00:17:32.057 { 00:17:32.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.057 "dma_device_type": 2 00:17:32.057 } 00:17:32.057 ], 00:17:32.057 "driver_specific": {} 00:17:32.057 } 00:17:32.057 ] 00:17:32.057 05:14:51 -- common/autotest_common.sh@895 -- # return 0 00:17:32.057 05:14:51 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:32.057 05:14:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:32.057 05:14:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:32.057 05:14:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:32.057 05:14:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:32.057 05:14:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:32.057 05:14:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:32.057 05:14:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:32.057 05:14:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:32.057 05:14:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:32.057 05:14:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.057 05:14:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.314 05:14:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:32.314 "name": "Existed_Raid", 00:17:32.314 "uuid": "2270dccd-b573-4738-a2b3-df387a22dca6", 00:17:32.314 "strip_size_kb": 0, 00:17:32.314 "state": "configuring", 00:17:32.314 "raid_level": "raid1", 00:17:32.314 "superblock": true, 00:17:32.314 "num_base_bdevs": 3, 00:17:32.314 "num_base_bdevs_discovered": 1, 00:17:32.314 "num_base_bdevs_operational": 3, 00:17:32.314 "base_bdevs_list": [ 00:17:32.314 { 00:17:32.314 "name": "BaseBdev1", 00:17:32.314 "uuid": "1e90d6fd-df28-4c91-9cf6-bd5881d50573", 00:17:32.314 "is_configured": true, 00:17:32.314 "data_offset": 2048, 00:17:32.314 "data_size": 63488 00:17:32.314 }, 00:17:32.314 { 00:17:32.314 "name": "BaseBdev2", 00:17:32.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.314 "is_configured": false, 00:17:32.314 "data_offset": 0, 00:17:32.314 "data_size": 0 00:17:32.314 }, 00:17:32.314 { 00:17:32.314 "name": "BaseBdev3", 00:17:32.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.314 "is_configured": false, 00:17:32.314 "data_offset": 0, 00:17:32.314 "data_size": 0 00:17:32.314 } 00:17:32.314 ] 00:17:32.314 }' 00:17:32.314 05:14:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:32.314 05:14:51 -- common/autotest_common.sh@10 -- # set +x 00:17:32.572 05:14:51 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:32.830 [2024-07-26 05:14:51.776285] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:32.830 [2024-07-26 05:14:51.776342] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x516000006680 name Existed_Raid, state configuring 00:17:32.830 05:14:51 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:32.830 05:14:51 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:33.088 05:14:52 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:33.346 BaseBdev1 00:17:33.346 05:14:52 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:33.346 05:14:52 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:33.346 05:14:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:33.346 05:14:52 -- common/autotest_common.sh@889 -- # local i 00:17:33.346 05:14:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:33.346 05:14:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:33.346 05:14:52 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:33.604 05:14:52 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:33.862 [ 00:17:33.862 { 00:17:33.862 "name": "BaseBdev1", 00:17:33.862 "aliases": [ 00:17:33.862 "d3117c72-298e-4ae3-8ad8-ce9eea39fff7" 00:17:33.862 ], 00:17:33.862 "product_name": "Malloc disk", 00:17:33.862 "block_size": 512, 00:17:33.862 "num_blocks": 65536, 00:17:33.862 "uuid": "d3117c72-298e-4ae3-8ad8-ce9eea39fff7", 00:17:33.862 "assigned_rate_limits": { 00:17:33.862 "rw_ios_per_sec": 0, 00:17:33.862 "rw_mbytes_per_sec": 0, 00:17:33.862 "r_mbytes_per_sec": 0, 00:17:33.862 "w_mbytes_per_sec": 0 00:17:33.862 }, 00:17:33.862 "claimed": false, 00:17:33.862 "zoned": false, 00:17:33.862 "supported_io_types": { 00:17:33.862 "read": true, 00:17:33.862 "write": true, 00:17:33.862 "unmap": true, 00:17:33.862 "write_zeroes": true, 00:17:33.862 "flush": true, 00:17:33.862 "reset": true, 00:17:33.862 "compare": false, 00:17:33.862 "compare_and_write": false, 00:17:33.862 "abort": true, 00:17:33.862 "nvme_admin": false, 00:17:33.862 "nvme_io": false 00:17:33.862 }, 00:17:33.862 "memory_domains": [ 00:17:33.862 { 00:17:33.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.862 "dma_device_type": 2 00:17:33.862 } 00:17:33.862 ], 00:17:33.862 "driver_specific": {} 00:17:33.862 } 00:17:33.862 ] 00:17:33.862 05:14:52 -- common/autotest_common.sh@895 -- # return 0 00:17:33.862 05:14:52 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:34.119 [2024-07-26 05:14:52.978623] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:34.119 [2024-07-26 05:14:52.980668] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:34.119 [2024-07-26 05:14:52.980736] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:34.119 [2024-07-26 05:14:52.980752] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:34.119 [2024-07-26 05:14:52.980767] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:34.119 05:14:52 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:34.119 05:14:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:34.119 05:14:52 -- 
bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:34.119 05:14:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:34.119 05:14:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:34.119 05:14:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:34.119 05:14:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:34.119 05:14:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:34.120 05:14:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:34.120 05:14:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:34.120 05:14:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:34.120 05:14:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:34.120 05:14:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.120 05:14:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.377 05:14:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:34.377 "name": "Existed_Raid", 00:17:34.377 "uuid": "a0d6dee4-49b2-4e8b-baba-6523c01beb25", 00:17:34.377 "strip_size_kb": 0, 00:17:34.377 "state": "configuring", 00:17:34.377 "raid_level": "raid1", 00:17:34.377 "superblock": true, 00:17:34.377 "num_base_bdevs": 3, 00:17:34.377 "num_base_bdevs_discovered": 1, 00:17:34.377 "num_base_bdevs_operational": 3, 00:17:34.377 "base_bdevs_list": [ 00:17:34.377 { 00:17:34.377 "name": "BaseBdev1", 00:17:34.377 "uuid": "d3117c72-298e-4ae3-8ad8-ce9eea39fff7", 00:17:34.377 "is_configured": true, 00:17:34.377 "data_offset": 2048, 00:17:34.377 "data_size": 63488 00:17:34.377 }, 00:17:34.377 { 00:17:34.377 "name": "BaseBdev2", 00:17:34.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.377 "is_configured": false, 00:17:34.377 "data_offset": 0, 00:17:34.377 "data_size": 0 00:17:34.377 }, 00:17:34.377 { 00:17:34.377 "name": "BaseBdev3", 00:17:34.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.377 "is_configured": false, 00:17:34.377 "data_offset": 0, 00:17:34.377 "data_size": 0 00:17:34.377 } 00:17:34.377 ] 00:17:34.377 }' 00:17:34.377 05:14:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:34.377 05:14:53 -- common/autotest_common.sh@10 -- # set +x 00:17:34.634 05:14:53 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:34.891 [2024-07-26 05:14:53.828522] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:34.891 BaseBdev2 00:17:34.891 05:14:53 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:34.891 05:14:53 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:34.891 05:14:53 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:34.891 05:14:53 -- common/autotest_common.sh@889 -- # local i 00:17:34.891 05:14:53 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:34.891 05:14:53 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:34.891 05:14:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:35.147 05:14:54 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:35.405 [ 00:17:35.405 { 00:17:35.405 "name": "BaseBdev2", 00:17:35.405 "aliases": [ 00:17:35.405 
"a0680a81-d238-44c7-8074-e44f9f0f3167" 00:17:35.405 ], 00:17:35.405 "product_name": "Malloc disk", 00:17:35.405 "block_size": 512, 00:17:35.405 "num_blocks": 65536, 00:17:35.405 "uuid": "a0680a81-d238-44c7-8074-e44f9f0f3167", 00:17:35.405 "assigned_rate_limits": { 00:17:35.405 "rw_ios_per_sec": 0, 00:17:35.405 "rw_mbytes_per_sec": 0, 00:17:35.405 "r_mbytes_per_sec": 0, 00:17:35.405 "w_mbytes_per_sec": 0 00:17:35.405 }, 00:17:35.405 "claimed": true, 00:17:35.405 "claim_type": "exclusive_write", 00:17:35.405 "zoned": false, 00:17:35.405 "supported_io_types": { 00:17:35.405 "read": true, 00:17:35.405 "write": true, 00:17:35.405 "unmap": true, 00:17:35.405 "write_zeroes": true, 00:17:35.405 "flush": true, 00:17:35.405 "reset": true, 00:17:35.405 "compare": false, 00:17:35.405 "compare_and_write": false, 00:17:35.405 "abort": true, 00:17:35.405 "nvme_admin": false, 00:17:35.405 "nvme_io": false 00:17:35.405 }, 00:17:35.405 "memory_domains": [ 00:17:35.405 { 00:17:35.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.405 "dma_device_type": 2 00:17:35.405 } 00:17:35.405 ], 00:17:35.405 "driver_specific": {} 00:17:35.405 } 00:17:35.405 ] 00:17:35.405 05:14:54 -- common/autotest_common.sh@895 -- # return 0 00:17:35.405 05:14:54 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:35.405 05:14:54 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:35.405 05:14:54 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:35.405 05:14:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:35.405 05:14:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:35.405 05:14:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:35.405 05:14:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:35.405 05:14:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:35.405 05:14:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:35.405 05:14:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:35.405 05:14:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:35.405 05:14:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:35.405 05:14:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.405 05:14:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.663 05:14:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:35.663 "name": "Existed_Raid", 00:17:35.663 "uuid": "a0d6dee4-49b2-4e8b-baba-6523c01beb25", 00:17:35.663 "strip_size_kb": 0, 00:17:35.663 "state": "configuring", 00:17:35.663 "raid_level": "raid1", 00:17:35.663 "superblock": true, 00:17:35.663 "num_base_bdevs": 3, 00:17:35.663 "num_base_bdevs_discovered": 2, 00:17:35.663 "num_base_bdevs_operational": 3, 00:17:35.663 "base_bdevs_list": [ 00:17:35.663 { 00:17:35.663 "name": "BaseBdev1", 00:17:35.663 "uuid": "d3117c72-298e-4ae3-8ad8-ce9eea39fff7", 00:17:35.663 "is_configured": true, 00:17:35.663 "data_offset": 2048, 00:17:35.663 "data_size": 63488 00:17:35.663 }, 00:17:35.663 { 00:17:35.663 "name": "BaseBdev2", 00:17:35.663 "uuid": "a0680a81-d238-44c7-8074-e44f9f0f3167", 00:17:35.663 "is_configured": true, 00:17:35.663 "data_offset": 2048, 00:17:35.663 "data_size": 63488 00:17:35.663 }, 00:17:35.663 { 00:17:35.663 "name": "BaseBdev3", 00:17:35.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.663 "is_configured": false, 00:17:35.663 "data_offset": 0, 00:17:35.663 "data_size": 0 00:17:35.663 } 
00:17:35.663 ] 00:17:35.663 }' 00:17:35.663 05:14:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:35.663 05:14:54 -- common/autotest_common.sh@10 -- # set +x 00:17:35.920 05:14:54 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:36.178 [2024-07-26 05:14:55.115180] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:36.178 [2024-07-26 05:14:55.115635] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:17:36.178 [2024-07-26 05:14:55.115811] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:36.178 [2024-07-26 05:14:55.116092] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:17:36.178 [2024-07-26 05:14:55.116607] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:17:36.178 BaseBdev3 00:17:36.178 [2024-07-26 05:14:55.116737] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:17:36.178 [2024-07-26 05:14:55.116972] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.178 05:14:55 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:36.178 05:14:55 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:36.178 05:14:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:36.178 05:14:55 -- common/autotest_common.sh@889 -- # local i 00:17:36.178 05:14:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:36.178 05:14:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:36.178 05:14:55 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:36.435 05:14:55 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:36.693 [ 00:17:36.693 { 00:17:36.693 "name": "BaseBdev3", 00:17:36.693 "aliases": [ 00:17:36.693 "4bc6171f-431b-477b-8a24-63099cd2cfb2" 00:17:36.693 ], 00:17:36.693 "product_name": "Malloc disk", 00:17:36.693 "block_size": 512, 00:17:36.693 "num_blocks": 65536, 00:17:36.693 "uuid": "4bc6171f-431b-477b-8a24-63099cd2cfb2", 00:17:36.693 "assigned_rate_limits": { 00:17:36.693 "rw_ios_per_sec": 0, 00:17:36.693 "rw_mbytes_per_sec": 0, 00:17:36.693 "r_mbytes_per_sec": 0, 00:17:36.693 "w_mbytes_per_sec": 0 00:17:36.693 }, 00:17:36.693 "claimed": true, 00:17:36.693 "claim_type": "exclusive_write", 00:17:36.693 "zoned": false, 00:17:36.693 "supported_io_types": { 00:17:36.693 "read": true, 00:17:36.693 "write": true, 00:17:36.693 "unmap": true, 00:17:36.693 "write_zeroes": true, 00:17:36.693 "flush": true, 00:17:36.693 "reset": true, 00:17:36.693 "compare": false, 00:17:36.693 "compare_and_write": false, 00:17:36.693 "abort": true, 00:17:36.693 "nvme_admin": false, 00:17:36.693 "nvme_io": false 00:17:36.693 }, 00:17:36.693 "memory_domains": [ 00:17:36.693 { 00:17:36.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.693 "dma_device_type": 2 00:17:36.693 } 00:17:36.693 ], 00:17:36.693 "driver_specific": {} 00:17:36.694 } 00:17:36.694 ] 00:17:36.694 05:14:55 -- common/autotest_common.sh@895 -- # return 0 00:17:36.694 05:14:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:36.694 05:14:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:36.694 05:14:55 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:36.694 05:14:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:36.694 05:14:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:36.694 05:14:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:36.694 05:14:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:36.694 05:14:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:36.694 05:14:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:36.694 05:14:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:36.694 05:14:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:36.694 05:14:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:36.694 05:14:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.694 05:14:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.694 05:14:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:36.694 "name": "Existed_Raid", 00:17:36.694 "uuid": "a0d6dee4-49b2-4e8b-baba-6523c01beb25", 00:17:36.694 "strip_size_kb": 0, 00:17:36.694 "state": "online", 00:17:36.694 "raid_level": "raid1", 00:17:36.694 "superblock": true, 00:17:36.694 "num_base_bdevs": 3, 00:17:36.694 "num_base_bdevs_discovered": 3, 00:17:36.694 "num_base_bdevs_operational": 3, 00:17:36.694 "base_bdevs_list": [ 00:17:36.694 { 00:17:36.694 "name": "BaseBdev1", 00:17:36.694 "uuid": "d3117c72-298e-4ae3-8ad8-ce9eea39fff7", 00:17:36.694 "is_configured": true, 00:17:36.694 "data_offset": 2048, 00:17:36.694 "data_size": 63488 00:17:36.694 }, 00:17:36.694 { 00:17:36.694 "name": "BaseBdev2", 00:17:36.694 "uuid": "a0680a81-d238-44c7-8074-e44f9f0f3167", 00:17:36.694 "is_configured": true, 00:17:36.694 "data_offset": 2048, 00:17:36.694 "data_size": 63488 00:17:36.694 }, 00:17:36.694 { 00:17:36.694 "name": "BaseBdev3", 00:17:36.694 "uuid": "4bc6171f-431b-477b-8a24-63099cd2cfb2", 00:17:36.694 "is_configured": true, 00:17:36.694 "data_offset": 2048, 00:17:36.694 "data_size": 63488 00:17:36.694 } 00:17:36.694 ] 00:17:36.694 }' 00:17:36.694 05:14:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:36.694 05:14:55 -- common/autotest_common.sh@10 -- # set +x 00:17:37.259 05:14:56 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:37.259 [2024-07-26 05:14:56.347660] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:37.517 05:14:56 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:37.517 05:14:56 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:17:37.517 05:14:56 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:37.517 05:14:56 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:37.517 05:14:56 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:17:37.517 05:14:56 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:37.517 05:14:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:37.517 05:14:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:37.517 05:14:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:37.517 05:14:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:37.517 05:14:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:37.517 05:14:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:37.517 05:14:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 
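A small helper visible throughout both runs (most recently for BaseBdev3 just above) is waitforbdev: it flushes pending examine callbacks with bdev_wait_for_examine and then calls bdev_get_bdevs with a -t timeout (2000 ms here) so the RPC itself waits for the named bdev to appear. A reconstruction of the helper as it appears in this trace, with the rpc wrapper again being illustrative:

SPDK_DIR=/home/vagrant/spdk_repo/spdk
rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-raid.sock "$@"; }

# Reconstructed from the traced common/autotest_common.sh helper: wait for
# outstanding examine callbacks, then let bdev_get_bdevs block up to the
# timeout for the bdev to show up. Non-zero exit if it never does.
waitforbdev() {
    local bdev_name=$1
    local bdev_timeout=${2:-2000}    # milliseconds, default as in the trace
    rpc bdev_wait_for_examine
    rpc bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" > /dev/null
}

waitforbdev BaseBdev3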
00:17:37.517 05:14:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:37.517 05:14:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:37.517 05:14:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.517 05:14:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.774 05:14:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:37.774 "name": "Existed_Raid", 00:17:37.774 "uuid": "a0d6dee4-49b2-4e8b-baba-6523c01beb25", 00:17:37.774 "strip_size_kb": 0, 00:17:37.774 "state": "online", 00:17:37.774 "raid_level": "raid1", 00:17:37.774 "superblock": true, 00:17:37.774 "num_base_bdevs": 3, 00:17:37.774 "num_base_bdevs_discovered": 2, 00:17:37.774 "num_base_bdevs_operational": 2, 00:17:37.775 "base_bdevs_list": [ 00:17:37.775 { 00:17:37.775 "name": null, 00:17:37.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.775 "is_configured": false, 00:17:37.775 "data_offset": 2048, 00:17:37.775 "data_size": 63488 00:17:37.775 }, 00:17:37.775 { 00:17:37.775 "name": "BaseBdev2", 00:17:37.775 "uuid": "a0680a81-d238-44c7-8074-e44f9f0f3167", 00:17:37.775 "is_configured": true, 00:17:37.775 "data_offset": 2048, 00:17:37.775 "data_size": 63488 00:17:37.775 }, 00:17:37.775 { 00:17:37.775 "name": "BaseBdev3", 00:17:37.775 "uuid": "4bc6171f-431b-477b-8a24-63099cd2cfb2", 00:17:37.775 "is_configured": true, 00:17:37.775 "data_offset": 2048, 00:17:37.775 "data_size": 63488 00:17:37.775 } 00:17:37.775 ] 00:17:37.775 }' 00:17:37.775 05:14:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:37.775 05:14:56 -- common/autotest_common.sh@10 -- # set +x 00:17:38.032 05:14:56 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:38.032 05:14:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:38.032 05:14:56 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.032 05:14:56 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:38.032 05:14:57 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:38.032 05:14:57 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:38.032 05:14:57 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:38.290 [2024-07-26 05:14:57.352254] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:38.558 05:14:57 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:38.558 05:14:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:38.558 05:14:57 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:38.558 05:14:57 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.558 05:14:57 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:38.558 05:14:57 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:38.558 05:14:57 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:38.833 [2024-07-26 05:14:57.884867] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:38.833 [2024-07-26 05:14:57.885141] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:38.833 [2024-07-26 05:14:57.885222] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:39.092 [2024-07-26 05:14:57.955874] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:39.092 [2024-07-26 05:14:57.955911] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:17:39.092 05:14:57 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:39.092 05:14:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:39.092 05:14:57 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.092 05:14:57 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:39.350 05:14:58 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:39.350 05:14:58 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:39.350 05:14:58 -- bdev/bdev_raid.sh@287 -- # killprocess 73606 00:17:39.350 05:14:58 -- common/autotest_common.sh@926 -- # '[' -z 73606 ']' 00:17:39.350 05:14:58 -- common/autotest_common.sh@930 -- # kill -0 73606 00:17:39.350 05:14:58 -- common/autotest_common.sh@931 -- # uname 00:17:39.350 05:14:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:39.350 05:14:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73606 00:17:39.350 killing process with pid 73606 00:17:39.350 05:14:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:39.350 05:14:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:39.350 05:14:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73606' 00:17:39.350 05:14:58 -- common/autotest_common.sh@945 -- # kill 73606 00:17:39.350 [2024-07-26 05:14:58.252808] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:39.350 05:14:58 -- common/autotest_common.sh@950 -- # wait 73606 00:17:39.350 [2024-07-26 05:14:58.252920] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:40.285 ************************************ 00:17:40.285 END TEST raid_state_function_test_sb 00:17:40.285 ************************************ 00:17:40.285 05:14:59 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:40.285 00:17:40.285 real 0m11.270s 00:17:40.285 user 0m18.796s 00:17:40.285 sys 0m1.634s 00:17:40.285 05:14:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:40.285 05:14:59 -- common/autotest_common.sh@10 -- # set +x 00:17:40.285 05:14:59 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:17:40.285 05:14:59 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:17:40.285 05:14:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:40.285 05:14:59 -- common/autotest_common.sh@10 -- # set +x 00:17:40.285 ************************************ 00:17:40.285 START TEST raid_superblock_test 00:17:40.285 ************************************ 00:17:40.285 05:14:59 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 3 00:17:40.285 05:14:59 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:17:40.285 05:14:59 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:17:40.285 05:14:59 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:40.285 05:14:59 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:40.285 05:14:59 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:40.285 05:14:59 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:40.285 05:14:59 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:40.285 05:14:59 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:40.285 05:14:59 -- bdev/bdev_raid.sh@343 -- # 
local raid_bdev_name=raid_bdev1 00:17:40.285 05:14:59 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:40.285 05:14:59 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:40.285 05:14:59 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:40.285 05:14:59 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:40.285 05:14:59 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:17:40.285 05:14:59 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:17:40.285 05:14:59 -- bdev/bdev_raid.sh@357 -- # raid_pid=73960 00:17:40.285 05:14:59 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:40.285 05:14:59 -- bdev/bdev_raid.sh@358 -- # waitforlisten 73960 /var/tmp/spdk-raid.sock 00:17:40.285 05:14:59 -- common/autotest_common.sh@819 -- # '[' -z 73960 ']' 00:17:40.285 05:14:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:40.285 05:14:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:40.285 05:14:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:40.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:40.285 05:14:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:40.285 05:14:59 -- common/autotest_common.sh@10 -- # set +x 00:17:40.544 [2024-07-26 05:14:59.408111] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:40.544 [2024-07-26 05:14:59.408254] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73960 ] 00:17:40.544 [2024-07-26 05:14:59.563964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.802 [2024-07-26 05:14:59.742876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.802 [2024-07-26 05:14:59.911327] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.367 05:15:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:41.367 05:15:00 -- common/autotest_common.sh@852 -- # return 0 00:17:41.367 05:15:00 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:41.367 05:15:00 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:41.367 05:15:00 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:41.367 05:15:00 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:41.367 05:15:00 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:41.367 05:15:00 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:41.367 05:15:00 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:41.367 05:15:00 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:41.367 05:15:00 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:41.625 malloc1 00:17:41.625 05:15:00 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:41.883 [2024-07-26 05:15:00.850112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:41.883 [2024-07-26 05:15:00.850194] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:17:41.883 [2024-07-26 05:15:00.850239] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:17:41.883 [2024-07-26 05:15:00.850255] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.883 [2024-07-26 05:15:00.852770] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.883 [2024-07-26 05:15:00.852830] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:41.883 pt1 00:17:41.883 05:15:00 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:41.883 05:15:00 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:41.883 05:15:00 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:41.883 05:15:00 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:41.883 05:15:00 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:41.883 05:15:00 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:41.883 05:15:00 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:41.883 05:15:00 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:41.883 05:15:00 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:42.140 malloc2 00:17:42.140 05:15:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:42.398 [2024-07-26 05:15:01.305057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:42.398 [2024-07-26 05:15:01.305171] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.398 [2024-07-26 05:15:01.305204] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:17:42.398 [2024-07-26 05:15:01.305218] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.398 [2024-07-26 05:15:01.307762] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.398 [2024-07-26 05:15:01.307821] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:42.398 pt2 00:17:42.398 05:15:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:42.398 05:15:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:42.398 05:15:01 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:42.398 05:15:01 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:42.398 05:15:01 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:42.398 05:15:01 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:42.398 05:15:01 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:42.398 05:15:01 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:42.398 05:15:01 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:42.669 malloc3 00:17:42.669 05:15:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:42.929 [2024-07-26 05:15:01.812199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:42.929 [2024-07-26 05:15:01.812296] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:17:42.929 [2024-07-26 05:15:01.812333] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:17:42.929 [2024-07-26 05:15:01.812348] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.929 [2024-07-26 05:15:01.814855] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.929 [2024-07-26 05:15:01.814915] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:42.929 pt3 00:17:42.929 05:15:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:42.929 05:15:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:42.929 05:15:01 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:42.929 [2024-07-26 05:15:02.036311] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:42.929 [2024-07-26 05:15:02.038656] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:42.929 [2024-07-26 05:15:02.038774] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:42.929 [2024-07-26 05:15:02.039064] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008780 00:17:42.929 [2024-07-26 05:15:02.039098] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:43.186 [2024-07-26 05:15:02.039237] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:17:43.186 [2024-07-26 05:15:02.039698] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008780 00:17:43.186 [2024-07-26 05:15:02.039728] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008780 00:17:43.186 [2024-07-26 05:15:02.039913] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.186 05:15:02 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:43.186 05:15:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:43.186 05:15:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:43.186 05:15:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:43.186 05:15:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:43.186 05:15:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:43.186 05:15:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:43.186 05:15:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:43.186 05:15:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:43.186 05:15:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:43.186 05:15:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.186 05:15:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.443 05:15:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:43.443 "name": "raid_bdev1", 00:17:43.443 "uuid": "0df1b751-61a3-4fe9-85fe-7f8c4c296f3e", 00:17:43.443 "strip_size_kb": 0, 00:17:43.443 "state": "online", 00:17:43.443 "raid_level": "raid1", 00:17:43.443 "superblock": true, 00:17:43.443 "num_base_bdevs": 3, 00:17:43.443 "num_base_bdevs_discovered": 3, 00:17:43.443 "num_base_bdevs_operational": 3, 00:17:43.443 "base_bdevs_list": [ 00:17:43.443 { 00:17:43.443 "name": "pt1", 00:17:43.443 "uuid": 
"269a705e-928e-575d-8d59-382167a2736a", 00:17:43.443 "is_configured": true, 00:17:43.443 "data_offset": 2048, 00:17:43.443 "data_size": 63488 00:17:43.443 }, 00:17:43.443 { 00:17:43.443 "name": "pt2", 00:17:43.443 "uuid": "31612332-e8c7-5180-81ac-08a72ad98230", 00:17:43.443 "is_configured": true, 00:17:43.443 "data_offset": 2048, 00:17:43.443 "data_size": 63488 00:17:43.443 }, 00:17:43.443 { 00:17:43.443 "name": "pt3", 00:17:43.443 "uuid": "2493eac5-3d3a-5f2c-b8e3-8b50f6b58952", 00:17:43.443 "is_configured": true, 00:17:43.443 "data_offset": 2048, 00:17:43.443 "data_size": 63488 00:17:43.443 } 00:17:43.443 ] 00:17:43.443 }' 00:17:43.443 05:15:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:43.443 05:15:02 -- common/autotest_common.sh@10 -- # set +x 00:17:43.701 05:15:02 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:43.701 05:15:02 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:43.959 [2024-07-26 05:15:02.900757] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:43.959 05:15:02 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=0df1b751-61a3-4fe9-85fe-7f8c4c296f3e 00:17:43.959 05:15:02 -- bdev/bdev_raid.sh@380 -- # '[' -z 0df1b751-61a3-4fe9-85fe-7f8c4c296f3e ']' 00:17:43.959 05:15:02 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:44.217 [2024-07-26 05:15:03.168605] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:44.217 [2024-07-26 05:15:03.168643] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:44.217 [2024-07-26 05:15:03.168749] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:44.217 [2024-07-26 05:15:03.168867] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:44.217 [2024-07-26 05:15:03.168889] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008780 name raid_bdev1, state offline 00:17:44.217 05:15:03 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.217 05:15:03 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:44.473 05:15:03 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:44.473 05:15:03 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:44.473 05:15:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:44.473 05:15:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:44.730 05:15:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:44.730 05:15:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:44.988 05:15:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:44.988 05:15:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:45.246 05:15:04 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:45.247 05:15:04 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:45.504 05:15:04 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:45.504 05:15:04 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:45.504 05:15:04 -- common/autotest_common.sh@640 -- # local es=0 00:17:45.504 05:15:04 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:45.504 05:15:04 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:45.504 05:15:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:45.504 05:15:04 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:45.504 05:15:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:45.504 05:15:04 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:45.504 05:15:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:45.504 05:15:04 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:45.504 05:15:04 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:45.504 05:15:04 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:45.504 [2024-07-26 05:15:04.580973] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:45.504 [2024-07-26 05:15:04.583135] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:45.504 [2024-07-26 05:15:04.583208] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:45.504 [2024-07-26 05:15:04.583278] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:45.504 [2024-07-26 05:15:04.583349] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:45.504 [2024-07-26 05:15:04.583387] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:45.504 [2024-07-26 05:15:04.583411] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:45.504 [2024-07-26 05:15:04.583428] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state configuring 00:17:45.504 request: 00:17:45.504 { 00:17:45.504 "name": "raid_bdev1", 00:17:45.504 "raid_level": "raid1", 00:17:45.504 "base_bdevs": [ 00:17:45.504 "malloc1", 00:17:45.504 "malloc2", 00:17:45.504 "malloc3" 00:17:45.504 ], 00:17:45.504 "superblock": false, 00:17:45.505 "method": "bdev_raid_create", 00:17:45.505 "req_id": 1 00:17:45.505 } 00:17:45.505 Got JSON-RPC error response 00:17:45.505 response: 00:17:45.505 { 00:17:45.505 "code": -17, 00:17:45.505 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:45.505 } 00:17:45.505 05:15:04 -- common/autotest_common.sh@643 -- # es=1 00:17:45.505 05:15:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:45.505 05:15:04 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:45.505 05:15:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:45.505 05:15:04 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.505 
05:15:04 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:45.763 05:15:04 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:45.763 05:15:04 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:45.763 05:15:04 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:46.021 [2024-07-26 05:15:05.101098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:46.021 [2024-07-26 05:15:05.101253] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.021 [2024-07-26 05:15:05.101311] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:17:46.021 [2024-07-26 05:15:05.101326] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.021 [2024-07-26 05:15:05.103781] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.021 [2024-07-26 05:15:05.103856] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:46.021 [2024-07-26 05:15:05.104007] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:46.021 [2024-07-26 05:15:05.104090] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:46.021 pt1 00:17:46.021 05:15:05 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:46.021 05:15:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:46.021 05:15:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:46.021 05:15:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:46.021 05:15:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:46.021 05:15:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:46.021 05:15:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:46.021 05:15:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:46.021 05:15:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:46.021 05:15:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:46.021 05:15:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.021 05:15:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.280 05:15:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:46.280 "name": "raid_bdev1", 00:17:46.280 "uuid": "0df1b751-61a3-4fe9-85fe-7f8c4c296f3e", 00:17:46.280 "strip_size_kb": 0, 00:17:46.280 "state": "configuring", 00:17:46.280 "raid_level": "raid1", 00:17:46.280 "superblock": true, 00:17:46.280 "num_base_bdevs": 3, 00:17:46.280 "num_base_bdevs_discovered": 1, 00:17:46.280 "num_base_bdevs_operational": 3, 00:17:46.280 "base_bdevs_list": [ 00:17:46.280 { 00:17:46.280 "name": "pt1", 00:17:46.280 "uuid": "269a705e-928e-575d-8d59-382167a2736a", 00:17:46.280 "is_configured": true, 00:17:46.280 "data_offset": 2048, 00:17:46.280 "data_size": 63488 00:17:46.280 }, 00:17:46.280 { 00:17:46.280 "name": null, 00:17:46.280 "uuid": "31612332-e8c7-5180-81ac-08a72ad98230", 00:17:46.280 "is_configured": false, 00:17:46.280 "data_offset": 2048, 00:17:46.280 "data_size": 63488 00:17:46.280 }, 00:17:46.280 { 00:17:46.280 "name": null, 00:17:46.280 "uuid": "2493eac5-3d3a-5f2c-b8e3-8b50f6b58952", 00:17:46.280 "is_configured": false, 00:17:46.280 "data_offset": 2048, 00:17:46.280 "data_size": 63488 00:17:46.280 } 
00:17:46.280 ] 00:17:46.280 }' 00:17:46.280 05:15:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:46.280 05:15:05 -- common/autotest_common.sh@10 -- # set +x 00:17:46.551 05:15:05 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:17:46.551 05:15:05 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:46.822 [2024-07-26 05:15:05.845295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:46.822 [2024-07-26 05:15:05.845400] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.822 [2024-07-26 05:15:05.845431] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:17:46.822 [2024-07-26 05:15:05.845446] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.822 [2024-07-26 05:15:05.845930] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.822 [2024-07-26 05:15:05.845970] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:46.822 [2024-07-26 05:15:05.846085] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:46.822 [2024-07-26 05:15:05.846155] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:46.822 pt2 00:17:46.822 05:15:05 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:47.080 [2024-07-26 05:15:06.073390] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:47.080 05:15:06 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:47.080 05:15:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:47.080 05:15:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:47.080 05:15:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:47.080 05:15:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:47.080 05:15:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:47.080 05:15:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:47.080 05:15:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:47.080 05:15:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:47.080 05:15:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:47.081 05:15:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.081 05:15:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.338 05:15:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:47.338 "name": "raid_bdev1", 00:17:47.338 "uuid": "0df1b751-61a3-4fe9-85fe-7f8c4c296f3e", 00:17:47.338 "strip_size_kb": 0, 00:17:47.338 "state": "configuring", 00:17:47.338 "raid_level": "raid1", 00:17:47.338 "superblock": true, 00:17:47.338 "num_base_bdevs": 3, 00:17:47.338 "num_base_bdevs_discovered": 1, 00:17:47.338 "num_base_bdevs_operational": 3, 00:17:47.338 "base_bdevs_list": [ 00:17:47.338 { 00:17:47.338 "name": "pt1", 00:17:47.338 "uuid": "269a705e-928e-575d-8d59-382167a2736a", 00:17:47.338 "is_configured": true, 00:17:47.338 "data_offset": 2048, 00:17:47.338 "data_size": 63488 00:17:47.338 }, 00:17:47.338 { 00:17:47.338 "name": null, 00:17:47.338 "uuid": "31612332-e8c7-5180-81ac-08a72ad98230", 00:17:47.338 "is_configured": false, 
00:17:47.338 "data_offset": 2048, 00:17:47.338 "data_size": 63488 00:17:47.338 }, 00:17:47.338 { 00:17:47.338 "name": null, 00:17:47.338 "uuid": "2493eac5-3d3a-5f2c-b8e3-8b50f6b58952", 00:17:47.338 "is_configured": false, 00:17:47.338 "data_offset": 2048, 00:17:47.338 "data_size": 63488 00:17:47.338 } 00:17:47.338 ] 00:17:47.338 }' 00:17:47.338 05:15:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:47.338 05:15:06 -- common/autotest_common.sh@10 -- # set +x 00:17:47.596 05:15:06 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:47.596 05:15:06 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:47.596 05:15:06 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:47.853 [2024-07-26 05:15:06.877650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:47.854 [2024-07-26 05:15:06.877759] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.854 [2024-07-26 05:15:06.877791] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:17:47.854 [2024-07-26 05:15:06.877804] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.854 [2024-07-26 05:15:06.878377] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.854 [2024-07-26 05:15:06.878413] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:47.854 [2024-07-26 05:15:06.878551] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:47.854 [2024-07-26 05:15:06.878579] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:47.854 pt2 00:17:47.854 05:15:06 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:47.854 05:15:06 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:47.854 05:15:06 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:48.112 [2024-07-26 05:15:07.093714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:48.112 [2024-07-26 05:15:07.093810] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.112 [2024-07-26 05:15:07.093841] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:17:48.112 [2024-07-26 05:15:07.093854] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.112 [2024-07-26 05:15:07.094421] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.112 [2024-07-26 05:15:07.094474] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:48.112 [2024-07-26 05:15:07.094604] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:48.112 [2024-07-26 05:15:07.094679] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:48.112 [2024-07-26 05:15:07.094853] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009980 00:17:48.112 [2024-07-26 05:15:07.094879] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:48.112 [2024-07-26 05:15:07.094989] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:17:48.112 [2024-07-26 05:15:07.095386] 
bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009980 00:17:48.112 [2024-07-26 05:15:07.095431] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009980 00:17:48.112 [2024-07-26 05:15:07.095604] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.112 pt3 00:17:48.112 05:15:07 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:48.112 05:15:07 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:48.112 05:15:07 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:48.112 05:15:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:48.112 05:15:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:48.112 05:15:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:48.112 05:15:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:48.112 05:15:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:48.112 05:15:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:48.112 05:15:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:48.112 05:15:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:48.112 05:15:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:48.112 05:15:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.112 05:15:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.370 05:15:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:48.370 "name": "raid_bdev1", 00:17:48.370 "uuid": "0df1b751-61a3-4fe9-85fe-7f8c4c296f3e", 00:17:48.370 "strip_size_kb": 0, 00:17:48.370 "state": "online", 00:17:48.370 "raid_level": "raid1", 00:17:48.370 "superblock": true, 00:17:48.370 "num_base_bdevs": 3, 00:17:48.370 "num_base_bdevs_discovered": 3, 00:17:48.370 "num_base_bdevs_operational": 3, 00:17:48.370 "base_bdevs_list": [ 00:17:48.370 { 00:17:48.370 "name": "pt1", 00:17:48.370 "uuid": "269a705e-928e-575d-8d59-382167a2736a", 00:17:48.370 "is_configured": true, 00:17:48.370 "data_offset": 2048, 00:17:48.370 "data_size": 63488 00:17:48.370 }, 00:17:48.370 { 00:17:48.370 "name": "pt2", 00:17:48.370 "uuid": "31612332-e8c7-5180-81ac-08a72ad98230", 00:17:48.370 "is_configured": true, 00:17:48.370 "data_offset": 2048, 00:17:48.370 "data_size": 63488 00:17:48.370 }, 00:17:48.370 { 00:17:48.370 "name": "pt3", 00:17:48.370 "uuid": "2493eac5-3d3a-5f2c-b8e3-8b50f6b58952", 00:17:48.370 "is_configured": true, 00:17:48.370 "data_offset": 2048, 00:17:48.370 "data_size": 63488 00:17:48.370 } 00:17:48.370 ] 00:17:48.370 }' 00:17:48.370 05:15:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:48.370 05:15:07 -- common/autotest_common.sh@10 -- # set +x 00:17:48.628 05:15:07 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:48.628 05:15:07 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:48.886 [2024-07-26 05:15:07.914173] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:48.886 05:15:07 -- bdev/bdev_raid.sh@430 -- # '[' 0df1b751-61a3-4fe9-85fe-7f8c4c296f3e '!=' 0df1b751-61a3-4fe9-85fe-7f8c4c296f3e ']' 00:17:48.886 05:15:07 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:17:48.886 05:15:07 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:48.886 05:15:07 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:48.886 05:15:07 -- 
bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:49.145 [2024-07-26 05:15:08.134008] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:49.145 05:15:08 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:49.145 05:15:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:49.145 05:15:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:49.145 05:15:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:49.145 05:15:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:49.145 05:15:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:49.145 05:15:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:49.145 05:15:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:49.145 05:15:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:49.145 05:15:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:49.145 05:15:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.145 05:15:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.403 05:15:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:49.403 "name": "raid_bdev1", 00:17:49.403 "uuid": "0df1b751-61a3-4fe9-85fe-7f8c4c296f3e", 00:17:49.403 "strip_size_kb": 0, 00:17:49.403 "state": "online", 00:17:49.403 "raid_level": "raid1", 00:17:49.403 "superblock": true, 00:17:49.403 "num_base_bdevs": 3, 00:17:49.403 "num_base_bdevs_discovered": 2, 00:17:49.403 "num_base_bdevs_operational": 2, 00:17:49.403 "base_bdevs_list": [ 00:17:49.403 { 00:17:49.403 "name": null, 00:17:49.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.403 "is_configured": false, 00:17:49.403 "data_offset": 2048, 00:17:49.403 "data_size": 63488 00:17:49.403 }, 00:17:49.403 { 00:17:49.403 "name": "pt2", 00:17:49.403 "uuid": "31612332-e8c7-5180-81ac-08a72ad98230", 00:17:49.403 "is_configured": true, 00:17:49.403 "data_offset": 2048, 00:17:49.403 "data_size": 63488 00:17:49.403 }, 00:17:49.403 { 00:17:49.403 "name": "pt3", 00:17:49.403 "uuid": "2493eac5-3d3a-5f2c-b8e3-8b50f6b58952", 00:17:49.403 "is_configured": true, 00:17:49.403 "data_offset": 2048, 00:17:49.403 "data_size": 63488 00:17:49.403 } 00:17:49.403 ] 00:17:49.403 }' 00:17:49.403 05:15:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:49.403 05:15:08 -- common/autotest_common.sh@10 -- # set +x 00:17:49.661 05:15:08 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:49.918 [2024-07-26 05:15:08.910184] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:49.918 [2024-07-26 05:15:08.910221] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:49.918 [2024-07-26 05:15:08.910305] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:49.918 [2024-07-26 05:15:08.910406] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:49.918 [2024-07-26 05:15:08.910426] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state offline 00:17:49.918 05:15:08 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.918 05:15:08 -- 
bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:17:50.176 05:15:09 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:17:50.176 05:15:09 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:17:50.176 05:15:09 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:17:50.176 05:15:09 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:50.176 05:15:09 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:50.435 05:15:09 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:17:50.435 05:15:09 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:50.435 05:15:09 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:50.692 05:15:09 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:17:50.692 05:15:09 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:50.692 05:15:09 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:17:50.692 05:15:09 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:17:50.692 05:15:09 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:50.692 [2024-07-26 05:15:09.782428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:50.692 [2024-07-26 05:15:09.782576] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.692 [2024-07-26 05:15:09.782604] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a580 00:17:50.692 [2024-07-26 05:15:09.782622] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.692 [2024-07-26 05:15:09.785059] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.692 [2024-07-26 05:15:09.785118] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:50.692 [2024-07-26 05:15:09.785215] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:50.693 [2024-07-26 05:15:09.785293] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:50.693 pt2 00:17:50.693 05:15:09 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:50.693 05:15:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:50.693 05:15:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:50.693 05:15:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:50.693 05:15:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:50.693 05:15:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:50.693 05:15:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:50.693 05:15:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:50.693 05:15:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:50.693 05:15:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:50.951 05:15:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.951 05:15:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.951 05:15:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:50.951 "name": "raid_bdev1", 00:17:50.951 "uuid": "0df1b751-61a3-4fe9-85fe-7f8c4c296f3e", 00:17:50.951 "strip_size_kb": 0, 00:17:50.951 "state": "configuring", 00:17:50.951 "raid_level": "raid1", 00:17:50.951 
"superblock": true, 00:17:50.951 "num_base_bdevs": 3, 00:17:50.951 "num_base_bdevs_discovered": 1, 00:17:50.951 "num_base_bdevs_operational": 2, 00:17:50.951 "base_bdevs_list": [ 00:17:50.951 { 00:17:50.951 "name": null, 00:17:50.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.951 "is_configured": false, 00:17:50.951 "data_offset": 2048, 00:17:50.951 "data_size": 63488 00:17:50.951 }, 00:17:50.951 { 00:17:50.951 "name": "pt2", 00:17:50.951 "uuid": "31612332-e8c7-5180-81ac-08a72ad98230", 00:17:50.951 "is_configured": true, 00:17:50.951 "data_offset": 2048, 00:17:50.951 "data_size": 63488 00:17:50.951 }, 00:17:50.951 { 00:17:50.951 "name": null, 00:17:50.951 "uuid": "2493eac5-3d3a-5f2c-b8e3-8b50f6b58952", 00:17:50.951 "is_configured": false, 00:17:50.951 "data_offset": 2048, 00:17:50.951 "data_size": 63488 00:17:50.951 } 00:17:50.951 ] 00:17:50.951 }' 00:17:50.951 05:15:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:50.951 05:15:10 -- common/autotest_common.sh@10 -- # set +x 00:17:51.514 05:15:10 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:17:51.514 05:15:10 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:17:51.514 05:15:10 -- bdev/bdev_raid.sh@462 -- # i=2 00:17:51.514 05:15:10 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:51.514 [2024-07-26 05:15:10.574766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:51.514 [2024-07-26 05:15:10.575104] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.514 [2024-07-26 05:15:10.575147] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:17:51.514 [2024-07-26 05:15:10.575166] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.514 [2024-07-26 05:15:10.575713] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.515 [2024-07-26 05:15:10.575741] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:51.515 [2024-07-26 05:15:10.575847] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:51.515 [2024-07-26 05:15:10.575877] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:51.515 [2024-07-26 05:15:10.575995] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ab80 00:17:51.515 [2024-07-26 05:15:10.576013] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:51.515 [2024-07-26 05:15:10.576099] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:17:51.515 [2024-07-26 05:15:10.576503] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ab80 00:17:51.515 [2024-07-26 05:15:10.576527] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ab80 00:17:51.515 [2024-07-26 05:15:10.576673] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.515 pt3 00:17:51.515 05:15:10 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:51.515 05:15:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:51.515 05:15:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:51.515 05:15:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:51.515 05:15:10 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:51.515 05:15:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:51.515 05:15:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:51.515 05:15:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:51.515 05:15:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:51.515 05:15:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:51.515 05:15:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.515 05:15:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.772 05:15:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:51.772 "name": "raid_bdev1", 00:17:51.772 "uuid": "0df1b751-61a3-4fe9-85fe-7f8c4c296f3e", 00:17:51.772 "strip_size_kb": 0, 00:17:51.772 "state": "online", 00:17:51.772 "raid_level": "raid1", 00:17:51.772 "superblock": true, 00:17:51.772 "num_base_bdevs": 3, 00:17:51.772 "num_base_bdevs_discovered": 2, 00:17:51.772 "num_base_bdevs_operational": 2, 00:17:51.772 "base_bdevs_list": [ 00:17:51.772 { 00:17:51.772 "name": null, 00:17:51.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.772 "is_configured": false, 00:17:51.772 "data_offset": 2048, 00:17:51.772 "data_size": 63488 00:17:51.772 }, 00:17:51.772 { 00:17:51.772 "name": "pt2", 00:17:51.772 "uuid": "31612332-e8c7-5180-81ac-08a72ad98230", 00:17:51.772 "is_configured": true, 00:17:51.772 "data_offset": 2048, 00:17:51.772 "data_size": 63488 00:17:51.772 }, 00:17:51.772 { 00:17:51.772 "name": "pt3", 00:17:51.772 "uuid": "2493eac5-3d3a-5f2c-b8e3-8b50f6b58952", 00:17:51.772 "is_configured": true, 00:17:51.772 "data_offset": 2048, 00:17:51.772 "data_size": 63488 00:17:51.772 } 00:17:51.772 ] 00:17:51.772 }' 00:17:51.772 05:15:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:51.772 05:15:10 -- common/autotest_common.sh@10 -- # set +x 00:17:52.040 05:15:11 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:17:52.041 05:15:11 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:52.301 [2024-07-26 05:15:11.346946] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:52.301 [2024-07-26 05:15:11.347186] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:52.301 [2024-07-26 05:15:11.347283] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:52.301 [2024-07-26 05:15:11.347362] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:52.301 [2024-07-26 05:15:11.347378] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ab80 name raid_bdev1, state offline 00:17:52.301 05:15:11 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.301 05:15:11 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:17:52.559 05:15:11 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:17:52.559 05:15:11 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:17:52.559 05:15:11 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:52.818 [2024-07-26 05:15:11.815045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:52.818 [2024-07-26 05:15:11.815160] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.818 [2024-07-26 05:15:11.815195] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:17:52.818 [2024-07-26 05:15:11.815209] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.818 [2024-07-26 05:15:11.817644] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.818 pt1 00:17:52.818 [2024-07-26 05:15:11.817851] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:52.818 [2024-07-26 05:15:11.817998] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:52.818 [2024-07-26 05:15:11.818105] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:52.818 05:15:11 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:52.818 05:15:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:52.818 05:15:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:52.818 05:15:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:52.818 05:15:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:52.818 05:15:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:52.818 05:15:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:52.818 05:15:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:52.818 05:15:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:52.818 05:15:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:52.818 05:15:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.818 05:15:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.076 05:15:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:53.076 "name": "raid_bdev1", 00:17:53.076 "uuid": "0df1b751-61a3-4fe9-85fe-7f8c4c296f3e", 00:17:53.076 "strip_size_kb": 0, 00:17:53.076 "state": "configuring", 00:17:53.076 "raid_level": "raid1", 00:17:53.076 "superblock": true, 00:17:53.076 "num_base_bdevs": 3, 00:17:53.076 "num_base_bdevs_discovered": 1, 00:17:53.076 "num_base_bdevs_operational": 3, 00:17:53.076 "base_bdevs_list": [ 00:17:53.076 { 00:17:53.076 "name": "pt1", 00:17:53.076 "uuid": "269a705e-928e-575d-8d59-382167a2736a", 00:17:53.076 "is_configured": true, 00:17:53.076 "data_offset": 2048, 00:17:53.076 "data_size": 63488 00:17:53.076 }, 00:17:53.076 { 00:17:53.076 "name": null, 00:17:53.076 "uuid": "31612332-e8c7-5180-81ac-08a72ad98230", 00:17:53.076 "is_configured": false, 00:17:53.076 "data_offset": 2048, 00:17:53.076 "data_size": 63488 00:17:53.076 }, 00:17:53.076 { 00:17:53.076 "name": null, 00:17:53.076 "uuid": "2493eac5-3d3a-5f2c-b8e3-8b50f6b58952", 00:17:53.076 "is_configured": false, 00:17:53.076 "data_offset": 2048, 00:17:53.076 "data_size": 63488 00:17:53.076 } 00:17:53.076 ] 00:17:53.076 }' 00:17:53.076 05:15:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:53.076 05:15:12 -- common/autotest_common.sh@10 -- # set +x 00:17:53.334 05:15:12 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:17:53.334 05:15:12 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:53.334 05:15:12 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:53.598 05:15:12 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:17:53.598 05:15:12 -- 
bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:53.598 05:15:12 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:53.857 05:15:12 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:17:53.857 05:15:12 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:53.857 05:15:12 -- bdev/bdev_raid.sh@489 -- # i=2 00:17:53.857 05:15:12 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:54.115 [2024-07-26 05:15:13.039471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:54.115 [2024-07-26 05:15:13.039558] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.115 [2024-07-26 05:15:13.039592] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ba80 00:17:54.115 [2024-07-26 05:15:13.039605] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.115 [2024-07-26 05:15:13.040125] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.115 [2024-07-26 05:15:13.040150] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:54.115 [2024-07-26 05:15:13.040284] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:54.115 [2024-07-26 05:15:13.040303] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:54.115 [2024-07-26 05:15:13.040319] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:54.115 [2024-07-26 05:15:13.040344] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000b780 name raid_bdev1, state configuring 00:17:54.115 [2024-07-26 05:15:13.040413] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:54.115 pt3 00:17:54.115 05:15:13 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:54.115 05:15:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:54.115 05:15:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:54.115 05:15:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:54.115 05:15:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:54.115 05:15:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:54.115 05:15:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:54.115 05:15:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:54.115 05:15:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:54.115 05:15:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:54.115 05:15:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.115 05:15:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.373 05:15:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:54.373 "name": "raid_bdev1", 00:17:54.373 "uuid": "0df1b751-61a3-4fe9-85fe-7f8c4c296f3e", 00:17:54.373 "strip_size_kb": 0, 00:17:54.373 "state": "configuring", 00:17:54.373 "raid_level": "raid1", 00:17:54.373 "superblock": true, 00:17:54.373 "num_base_bdevs": 3, 00:17:54.373 "num_base_bdevs_discovered": 1, 00:17:54.373 "num_base_bdevs_operational": 2, 00:17:54.373 "base_bdevs_list": [ 
00:17:54.373 { 00:17:54.373 "name": null, 00:17:54.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.373 "is_configured": false, 00:17:54.373 "data_offset": 2048, 00:17:54.373 "data_size": 63488 00:17:54.373 }, 00:17:54.373 { 00:17:54.373 "name": null, 00:17:54.373 "uuid": "31612332-e8c7-5180-81ac-08a72ad98230", 00:17:54.373 "is_configured": false, 00:17:54.373 "data_offset": 2048, 00:17:54.373 "data_size": 63488 00:17:54.373 }, 00:17:54.373 { 00:17:54.373 "name": "pt3", 00:17:54.373 "uuid": "2493eac5-3d3a-5f2c-b8e3-8b50f6b58952", 00:17:54.373 "is_configured": true, 00:17:54.373 "data_offset": 2048, 00:17:54.373 "data_size": 63488 00:17:54.373 } 00:17:54.373 ] 00:17:54.373 }' 00:17:54.373 05:15:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:54.373 05:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:54.631 05:15:13 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:17:54.631 05:15:13 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:17:54.631 05:15:13 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:54.890 [2024-07-26 05:15:13.815715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:54.890 [2024-07-26 05:15:13.815813] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.890 [2024-07-26 05:15:13.815845] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c080 00:17:54.890 [2024-07-26 05:15:13.815861] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.890 [2024-07-26 05:15:13.816487] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.890 [2024-07-26 05:15:13.816540] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:54.890 [2024-07-26 05:15:13.816648] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:54.890 [2024-07-26 05:15:13.816683] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:54.890 [2024-07-26 05:15:13.816844] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000bd80 00:17:54.890 [2024-07-26 05:15:13.816865] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:54.890 [2024-07-26 05:15:13.817027] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:17:54.890 [2024-07-26 05:15:13.817490] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000bd80 00:17:54.890 [2024-07-26 05:15:13.817529] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000bd80 00:17:54.890 [2024-07-26 05:15:13.817678] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.890 pt2 00:17:54.890 05:15:13 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:17:54.890 05:15:13 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:17:54.890 05:15:13 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:54.890 05:15:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:54.890 05:15:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:54.890 05:15:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:54.890 05:15:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:54.890 05:15:13 -- bdev/bdev_raid.sh@121 -- # 
local num_base_bdevs_operational=2 00:17:54.890 05:15:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:54.890 05:15:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:54.890 05:15:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:54.890 05:15:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:54.890 05:15:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.890 05:15:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.149 05:15:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:55.149 "name": "raid_bdev1", 00:17:55.149 "uuid": "0df1b751-61a3-4fe9-85fe-7f8c4c296f3e", 00:17:55.149 "strip_size_kb": 0, 00:17:55.149 "state": "online", 00:17:55.149 "raid_level": "raid1", 00:17:55.149 "superblock": true, 00:17:55.149 "num_base_bdevs": 3, 00:17:55.149 "num_base_bdevs_discovered": 2, 00:17:55.149 "num_base_bdevs_operational": 2, 00:17:55.149 "base_bdevs_list": [ 00:17:55.149 { 00:17:55.149 "name": null, 00:17:55.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.149 "is_configured": false, 00:17:55.149 "data_offset": 2048, 00:17:55.149 "data_size": 63488 00:17:55.149 }, 00:17:55.149 { 00:17:55.149 "name": "pt2", 00:17:55.149 "uuid": "31612332-e8c7-5180-81ac-08a72ad98230", 00:17:55.149 "is_configured": true, 00:17:55.149 "data_offset": 2048, 00:17:55.149 "data_size": 63488 00:17:55.149 }, 00:17:55.149 { 00:17:55.149 "name": "pt3", 00:17:55.149 "uuid": "2493eac5-3d3a-5f2c-b8e3-8b50f6b58952", 00:17:55.149 "is_configured": true, 00:17:55.149 "data_offset": 2048, 00:17:55.149 "data_size": 63488 00:17:55.149 } 00:17:55.149 ] 00:17:55.149 }' 00:17:55.149 05:15:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:55.149 05:15:14 -- common/autotest_common.sh@10 -- # set +x 00:17:55.407 05:15:14 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:55.407 05:15:14 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:17:55.665 [2024-07-26 05:15:14.588172] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:55.665 05:15:14 -- bdev/bdev_raid.sh@506 -- # '[' 0df1b751-61a3-4fe9-85fe-7f8c4c296f3e '!=' 0df1b751-61a3-4fe9-85fe-7f8c4c296f3e ']' 00:17:55.665 05:15:14 -- bdev/bdev_raid.sh@511 -- # killprocess 73960 00:17:55.665 05:15:14 -- common/autotest_common.sh@926 -- # '[' -z 73960 ']' 00:17:55.665 05:15:14 -- common/autotest_common.sh@930 -- # kill -0 73960 00:17:55.665 05:15:14 -- common/autotest_common.sh@931 -- # uname 00:17:55.665 05:15:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:55.665 05:15:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73960 00:17:55.665 killing process with pid 73960 00:17:55.665 05:15:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:55.665 05:15:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:55.665 05:15:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73960' 00:17:55.665 05:15:14 -- common/autotest_common.sh@945 -- # kill 73960 00:17:55.665 [2024-07-26 05:15:14.639853] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:55.665 05:15:14 -- common/autotest_common.sh@950 -- # wait 73960 00:17:55.665 [2024-07-26 05:15:14.639943] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:55.665 [2024-07-26 05:15:14.640030] bdev_raid.c: 
426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:55.665 [2024-07-26 05:15:14.640046] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000bd80 name raid_bdev1, state offline 00:17:55.923 [2024-07-26 05:15:14.853402] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:56.888 ************************************ 00:17:56.888 END TEST raid_superblock_test 00:17:56.888 ************************************ 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:56.888 00:17:56.888 real 0m16.540s 00:17:56.888 user 0m28.594s 00:17:56.888 sys 0m2.482s 00:17:56.888 05:15:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:56.888 05:15:15 -- common/autotest_common.sh@10 -- # set +x 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:17:56.888 05:15:15 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:56.888 05:15:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:56.888 05:15:15 -- common/autotest_common.sh@10 -- # set +x 00:17:56.888 ************************************ 00:17:56.888 START TEST raid_state_function_test 00:17:56.888 ************************************ 00:17:56.888 05:15:15 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 false 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:56.888 05:15:15 -- 
bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:56.888 Process raid pid: 74511 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@226 -- # raid_pid=74511 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 74511' 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:56.888 05:15:15 -- bdev/bdev_raid.sh@228 -- # waitforlisten 74511 /var/tmp/spdk-raid.sock 00:17:56.888 05:15:15 -- common/autotest_common.sh@819 -- # '[' -z 74511 ']' 00:17:56.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:56.888 05:15:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:56.888 05:15:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:56.888 05:15:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:56.888 05:15:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:56.888 05:15:15 -- common/autotest_common.sh@10 -- # set +x 00:17:57.169 [2024-07-26 05:15:16.020184] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:57.169 [2024-07-26 05:15:16.020571] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.169 [2024-07-26 05:15:16.195708] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.427 [2024-07-26 05:15:16.431067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.686 [2024-07-26 05:15:16.590515] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:57.944 05:15:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:57.944 05:15:16 -- common/autotest_common.sh@852 -- # return 0 00:17:57.944 05:15:16 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:58.200 [2024-07-26 05:15:17.179052] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:58.200 [2024-07-26 05:15:17.179167] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:58.201 [2024-07-26 05:15:17.179186] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:58.201 [2024-07-26 05:15:17.179203] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:58.201 [2024-07-26 05:15:17.179223] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:58.201 [2024-07-26 05:15:17.179237] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:58.201 [2024-07-26 05:15:17.179261] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:58.201 [2024-07-26 05:15:17.179274] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:58.201 05:15:17 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:58.201 05:15:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:58.201 
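For readers following the trace, the state checks repeated throughout this log (verify_raid_bdev_state) amount to one RPC query filtered with jq plus a few field comparisons. Below is a minimal stand-alone sketch of that pattern, reusing the rpc.py path and socket shown in the log; the helper name check_raid_state is illustrative and not part of the harness itself.

  #!/usr/bin/env bash
  # Sketch only: fetch every raid bdev over the test RPC socket and compare
  # selected fields of the named one against the expected values.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  check_raid_state() {
      local name=$1 expected_state=$2 expected_level=$3
      local info
      info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
             jq -r ".[] | select(.name == \"$name\")")
      [[ $(jq -r '.state' <<<"$info") == "$expected_state" ]] &&
          [[ $(jq -r '.raid_level' <<<"$info") == "$expected_level" ]]
  }

  # Matches the dump that follows: Existed_Raid should be configuring at raid0.
  check_raid_state Existed_Raid configuring raid0
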
05:15:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:58.201 05:15:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:58.201 05:15:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:58.201 05:15:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:58.201 05:15:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:58.201 05:15:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:58.201 05:15:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:58.201 05:15:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:58.201 05:15:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.201 05:15:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.459 05:15:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:58.459 "name": "Existed_Raid", 00:17:58.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.459 "strip_size_kb": 64, 00:17:58.459 "state": "configuring", 00:17:58.459 "raid_level": "raid0", 00:17:58.459 "superblock": false, 00:17:58.459 "num_base_bdevs": 4, 00:17:58.459 "num_base_bdevs_discovered": 0, 00:17:58.459 "num_base_bdevs_operational": 4, 00:17:58.459 "base_bdevs_list": [ 00:17:58.459 { 00:17:58.459 "name": "BaseBdev1", 00:17:58.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.459 "is_configured": false, 00:17:58.459 "data_offset": 0, 00:17:58.459 "data_size": 0 00:17:58.459 }, 00:17:58.459 { 00:17:58.459 "name": "BaseBdev2", 00:17:58.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.459 "is_configured": false, 00:17:58.459 "data_offset": 0, 00:17:58.459 "data_size": 0 00:17:58.459 }, 00:17:58.459 { 00:17:58.459 "name": "BaseBdev3", 00:17:58.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.459 "is_configured": false, 00:17:58.459 "data_offset": 0, 00:17:58.459 "data_size": 0 00:17:58.459 }, 00:17:58.459 { 00:17:58.459 "name": "BaseBdev4", 00:17:58.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.459 "is_configured": false, 00:17:58.459 "data_offset": 0, 00:17:58.459 "data_size": 0 00:17:58.459 } 00:17:58.459 ] 00:17:58.459 }' 00:17:58.459 05:15:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:58.459 05:15:17 -- common/autotest_common.sh@10 -- # set +x 00:17:58.717 05:15:17 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:58.976 [2024-07-26 05:15:17.955171] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:58.976 [2024-07-26 05:15:17.955219] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:17:58.976 05:15:17 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:59.235 [2024-07-26 05:15:18.203234] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:59.235 [2024-07-26 05:15:18.203340] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:59.235 [2024-07-26 05:15:18.203354] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:59.235 [2024-07-26 05:15:18.203369] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:59.235 [2024-07-26 
05:15:18.203377] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:59.235 [2024-07-26 05:15:18.203389] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:59.235 [2024-07-26 05:15:18.203397] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:59.235 [2024-07-26 05:15:18.203409] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:59.235 05:15:18 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:59.493 [2024-07-26 05:15:18.447968] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:59.493 BaseBdev1 00:17:59.493 05:15:18 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:59.493 05:15:18 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:59.493 05:15:18 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:59.493 05:15:18 -- common/autotest_common.sh@889 -- # local i 00:17:59.493 05:15:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:59.493 05:15:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:59.493 05:15:18 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:59.752 05:15:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:00.010 [ 00:18:00.010 { 00:18:00.010 "name": "BaseBdev1", 00:18:00.010 "aliases": [ 00:18:00.010 "0c5e1816-7053-4036-92d2-69c933084d26" 00:18:00.010 ], 00:18:00.010 "product_name": "Malloc disk", 00:18:00.010 "block_size": 512, 00:18:00.010 "num_blocks": 65536, 00:18:00.010 "uuid": "0c5e1816-7053-4036-92d2-69c933084d26", 00:18:00.010 "assigned_rate_limits": { 00:18:00.010 "rw_ios_per_sec": 0, 00:18:00.010 "rw_mbytes_per_sec": 0, 00:18:00.010 "r_mbytes_per_sec": 0, 00:18:00.010 "w_mbytes_per_sec": 0 00:18:00.010 }, 00:18:00.010 "claimed": true, 00:18:00.010 "claim_type": "exclusive_write", 00:18:00.010 "zoned": false, 00:18:00.010 "supported_io_types": { 00:18:00.010 "read": true, 00:18:00.010 "write": true, 00:18:00.010 "unmap": true, 00:18:00.010 "write_zeroes": true, 00:18:00.010 "flush": true, 00:18:00.010 "reset": true, 00:18:00.010 "compare": false, 00:18:00.010 "compare_and_write": false, 00:18:00.010 "abort": true, 00:18:00.010 "nvme_admin": false, 00:18:00.010 "nvme_io": false 00:18:00.010 }, 00:18:00.010 "memory_domains": [ 00:18:00.010 { 00:18:00.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.010 "dma_device_type": 2 00:18:00.010 } 00:18:00.010 ], 00:18:00.010 "driver_specific": {} 00:18:00.010 } 00:18:00.010 ] 00:18:00.010 05:15:18 -- common/autotest_common.sh@895 -- # return 0 00:18:00.010 05:15:18 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:00.010 05:15:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:00.010 05:15:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:00.010 05:15:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:00.010 05:15:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:00.010 05:15:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:00.010 05:15:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:00.010 05:15:18 -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs 00:18:00.010 05:15:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:00.010 05:15:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:00.011 05:15:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.011 05:15:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.270 05:15:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:00.270 "name": "Existed_Raid", 00:18:00.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.270 "strip_size_kb": 64, 00:18:00.270 "state": "configuring", 00:18:00.270 "raid_level": "raid0", 00:18:00.270 "superblock": false, 00:18:00.270 "num_base_bdevs": 4, 00:18:00.270 "num_base_bdevs_discovered": 1, 00:18:00.270 "num_base_bdevs_operational": 4, 00:18:00.270 "base_bdevs_list": [ 00:18:00.270 { 00:18:00.270 "name": "BaseBdev1", 00:18:00.270 "uuid": "0c5e1816-7053-4036-92d2-69c933084d26", 00:18:00.270 "is_configured": true, 00:18:00.270 "data_offset": 0, 00:18:00.270 "data_size": 65536 00:18:00.270 }, 00:18:00.270 { 00:18:00.270 "name": "BaseBdev2", 00:18:00.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.270 "is_configured": false, 00:18:00.270 "data_offset": 0, 00:18:00.270 "data_size": 0 00:18:00.270 }, 00:18:00.270 { 00:18:00.270 "name": "BaseBdev3", 00:18:00.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.270 "is_configured": false, 00:18:00.270 "data_offset": 0, 00:18:00.270 "data_size": 0 00:18:00.270 }, 00:18:00.270 { 00:18:00.270 "name": "BaseBdev4", 00:18:00.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.270 "is_configured": false, 00:18:00.270 "data_offset": 0, 00:18:00.270 "data_size": 0 00:18:00.270 } 00:18:00.270 ] 00:18:00.270 }' 00:18:00.270 05:15:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:00.270 05:15:19 -- common/autotest_common.sh@10 -- # set +x 00:18:00.528 05:15:19 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:00.787 [2024-07-26 05:15:19.664472] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:00.787 [2024-07-26 05:15:19.664526] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:18:00.787 05:15:19 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:00.787 05:15:19 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:01.046 [2024-07-26 05:15:19.920576] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:01.046 [2024-07-26 05:15:19.922756] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:01.046 [2024-07-26 05:15:19.922823] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:01.046 [2024-07-26 05:15:19.922838] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:01.046 [2024-07-26 05:15:19.922852] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:01.046 [2024-07-26 05:15:19.922860] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:01.046 [2024-07-26 05:15:19.922874] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't 
exist now 00:18:01.046 05:15:19 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:01.046 05:15:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:01.046 05:15:19 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:01.046 05:15:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:01.046 05:15:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:01.046 05:15:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:01.046 05:15:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:01.046 05:15:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:01.046 05:15:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:01.046 05:15:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:01.046 05:15:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:01.046 05:15:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:01.046 05:15:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.046 05:15:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.305 05:15:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:01.305 "name": "Existed_Raid", 00:18:01.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.305 "strip_size_kb": 64, 00:18:01.305 "state": "configuring", 00:18:01.305 "raid_level": "raid0", 00:18:01.305 "superblock": false, 00:18:01.305 "num_base_bdevs": 4, 00:18:01.305 "num_base_bdevs_discovered": 1, 00:18:01.305 "num_base_bdevs_operational": 4, 00:18:01.305 "base_bdevs_list": [ 00:18:01.305 { 00:18:01.305 "name": "BaseBdev1", 00:18:01.305 "uuid": "0c5e1816-7053-4036-92d2-69c933084d26", 00:18:01.305 "is_configured": true, 00:18:01.305 "data_offset": 0, 00:18:01.305 "data_size": 65536 00:18:01.305 }, 00:18:01.305 { 00:18:01.305 "name": "BaseBdev2", 00:18:01.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.305 "is_configured": false, 00:18:01.305 "data_offset": 0, 00:18:01.305 "data_size": 0 00:18:01.305 }, 00:18:01.305 { 00:18:01.305 "name": "BaseBdev3", 00:18:01.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.305 "is_configured": false, 00:18:01.305 "data_offset": 0, 00:18:01.305 "data_size": 0 00:18:01.305 }, 00:18:01.305 { 00:18:01.305 "name": "BaseBdev4", 00:18:01.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.305 "is_configured": false, 00:18:01.305 "data_offset": 0, 00:18:01.305 "data_size": 0 00:18:01.305 } 00:18:01.305 ] 00:18:01.305 }' 00:18:01.305 05:15:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:01.305 05:15:20 -- common/autotest_common.sh@10 -- # set +x 00:18:01.563 05:15:20 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:01.821 [2024-07-26 05:15:20.749282] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:01.821 BaseBdev2 00:18:01.821 05:15:20 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:01.821 05:15:20 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:01.821 05:15:20 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:01.821 05:15:20 -- common/autotest_common.sh@889 -- # local i 00:18:01.821 05:15:20 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:01.821 05:15:20 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:01.821 05:15:20 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:02.079 05:15:21 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:02.337 [ 00:18:02.337 { 00:18:02.337 "name": "BaseBdev2", 00:18:02.337 "aliases": [ 00:18:02.337 "6a6452e1-891e-4748-b984-d3959155c191" 00:18:02.337 ], 00:18:02.337 "product_name": "Malloc disk", 00:18:02.337 "block_size": 512, 00:18:02.337 "num_blocks": 65536, 00:18:02.337 "uuid": "6a6452e1-891e-4748-b984-d3959155c191", 00:18:02.337 "assigned_rate_limits": { 00:18:02.337 "rw_ios_per_sec": 0, 00:18:02.337 "rw_mbytes_per_sec": 0, 00:18:02.337 "r_mbytes_per_sec": 0, 00:18:02.337 "w_mbytes_per_sec": 0 00:18:02.337 }, 00:18:02.337 "claimed": true, 00:18:02.337 "claim_type": "exclusive_write", 00:18:02.337 "zoned": false, 00:18:02.337 "supported_io_types": { 00:18:02.337 "read": true, 00:18:02.337 "write": true, 00:18:02.337 "unmap": true, 00:18:02.337 "write_zeroes": true, 00:18:02.337 "flush": true, 00:18:02.337 "reset": true, 00:18:02.337 "compare": false, 00:18:02.337 "compare_and_write": false, 00:18:02.337 "abort": true, 00:18:02.337 "nvme_admin": false, 00:18:02.337 "nvme_io": false 00:18:02.337 }, 00:18:02.337 "memory_domains": [ 00:18:02.337 { 00:18:02.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.337 "dma_device_type": 2 00:18:02.337 } 00:18:02.337 ], 00:18:02.337 "driver_specific": {} 00:18:02.337 } 00:18:02.337 ] 00:18:02.337 05:15:21 -- common/autotest_common.sh@895 -- # return 0 00:18:02.337 05:15:21 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:02.337 05:15:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:02.337 05:15:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:02.337 05:15:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:02.337 05:15:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:02.337 05:15:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:02.337 05:15:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:02.337 05:15:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:02.337 05:15:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:02.337 05:15:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:02.337 05:15:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:02.337 05:15:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:02.337 05:15:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.337 05:15:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.596 05:15:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:02.596 "name": "Existed_Raid", 00:18:02.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.596 "strip_size_kb": 64, 00:18:02.596 "state": "configuring", 00:18:02.596 "raid_level": "raid0", 00:18:02.596 "superblock": false, 00:18:02.596 "num_base_bdevs": 4, 00:18:02.596 "num_base_bdevs_discovered": 2, 00:18:02.596 "num_base_bdevs_operational": 4, 00:18:02.596 "base_bdevs_list": [ 00:18:02.596 { 00:18:02.596 "name": "BaseBdev1", 00:18:02.596 "uuid": "0c5e1816-7053-4036-92d2-69c933084d26", 00:18:02.596 "is_configured": true, 00:18:02.596 "data_offset": 0, 00:18:02.596 "data_size": 65536 00:18:02.596 }, 00:18:02.596 { 00:18:02.596 "name": "BaseBdev2", 00:18:02.596 "uuid": 
"6a6452e1-891e-4748-b984-d3959155c191", 00:18:02.596 "is_configured": true, 00:18:02.596 "data_offset": 0, 00:18:02.596 "data_size": 65536 00:18:02.596 }, 00:18:02.596 { 00:18:02.596 "name": "BaseBdev3", 00:18:02.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.596 "is_configured": false, 00:18:02.596 "data_offset": 0, 00:18:02.596 "data_size": 0 00:18:02.596 }, 00:18:02.596 { 00:18:02.596 "name": "BaseBdev4", 00:18:02.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.596 "is_configured": false, 00:18:02.596 "data_offset": 0, 00:18:02.596 "data_size": 0 00:18:02.596 } 00:18:02.596 ] 00:18:02.596 }' 00:18:02.596 05:15:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:02.596 05:15:21 -- common/autotest_common.sh@10 -- # set +x 00:18:02.854 05:15:21 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:03.113 [2024-07-26 05:15:21.971959] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:03.113 BaseBdev3 00:18:03.113 05:15:21 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:03.113 05:15:21 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:03.113 05:15:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:03.113 05:15:21 -- common/autotest_common.sh@889 -- # local i 00:18:03.113 05:15:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:03.113 05:15:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:03.113 05:15:21 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:03.371 05:15:22 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:03.630 [ 00:18:03.630 { 00:18:03.630 "name": "BaseBdev3", 00:18:03.630 "aliases": [ 00:18:03.630 "94fdabf6-4ae7-4613-b802-6d7d06a01596" 00:18:03.630 ], 00:18:03.630 "product_name": "Malloc disk", 00:18:03.630 "block_size": 512, 00:18:03.630 "num_blocks": 65536, 00:18:03.630 "uuid": "94fdabf6-4ae7-4613-b802-6d7d06a01596", 00:18:03.630 "assigned_rate_limits": { 00:18:03.630 "rw_ios_per_sec": 0, 00:18:03.630 "rw_mbytes_per_sec": 0, 00:18:03.630 "r_mbytes_per_sec": 0, 00:18:03.630 "w_mbytes_per_sec": 0 00:18:03.630 }, 00:18:03.630 "claimed": true, 00:18:03.630 "claim_type": "exclusive_write", 00:18:03.630 "zoned": false, 00:18:03.630 "supported_io_types": { 00:18:03.630 "read": true, 00:18:03.630 "write": true, 00:18:03.630 "unmap": true, 00:18:03.630 "write_zeroes": true, 00:18:03.630 "flush": true, 00:18:03.630 "reset": true, 00:18:03.630 "compare": false, 00:18:03.630 "compare_and_write": false, 00:18:03.630 "abort": true, 00:18:03.630 "nvme_admin": false, 00:18:03.630 "nvme_io": false 00:18:03.630 }, 00:18:03.630 "memory_domains": [ 00:18:03.630 { 00:18:03.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.630 "dma_device_type": 2 00:18:03.630 } 00:18:03.630 ], 00:18:03.630 "driver_specific": {} 00:18:03.630 } 00:18:03.630 ] 00:18:03.630 05:15:22 -- common/autotest_common.sh@895 -- # return 0 00:18:03.630 05:15:22 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:03.630 05:15:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:03.630 05:15:22 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:03.630 05:15:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:03.630 05:15:22 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:03.630 05:15:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:03.630 05:15:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:03.630 05:15:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:03.630 05:15:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:03.630 05:15:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:03.630 05:15:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:03.630 05:15:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:03.630 05:15:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.630 05:15:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.888 05:15:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:03.888 "name": "Existed_Raid", 00:18:03.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.888 "strip_size_kb": 64, 00:18:03.888 "state": "configuring", 00:18:03.888 "raid_level": "raid0", 00:18:03.888 "superblock": false, 00:18:03.888 "num_base_bdevs": 4, 00:18:03.888 "num_base_bdevs_discovered": 3, 00:18:03.888 "num_base_bdevs_operational": 4, 00:18:03.888 "base_bdevs_list": [ 00:18:03.888 { 00:18:03.888 "name": "BaseBdev1", 00:18:03.888 "uuid": "0c5e1816-7053-4036-92d2-69c933084d26", 00:18:03.888 "is_configured": true, 00:18:03.888 "data_offset": 0, 00:18:03.888 "data_size": 65536 00:18:03.888 }, 00:18:03.888 { 00:18:03.888 "name": "BaseBdev2", 00:18:03.888 "uuid": "6a6452e1-891e-4748-b984-d3959155c191", 00:18:03.888 "is_configured": true, 00:18:03.888 "data_offset": 0, 00:18:03.888 "data_size": 65536 00:18:03.888 }, 00:18:03.888 { 00:18:03.888 "name": "BaseBdev3", 00:18:03.888 "uuid": "94fdabf6-4ae7-4613-b802-6d7d06a01596", 00:18:03.888 "is_configured": true, 00:18:03.888 "data_offset": 0, 00:18:03.888 "data_size": 65536 00:18:03.888 }, 00:18:03.888 { 00:18:03.888 "name": "BaseBdev4", 00:18:03.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.888 "is_configured": false, 00:18:03.888 "data_offset": 0, 00:18:03.888 "data_size": 0 00:18:03.888 } 00:18:03.888 ] 00:18:03.888 }' 00:18:03.888 05:15:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:03.888 05:15:22 -- common/autotest_common.sh@10 -- # set +x 00:18:04.147 05:15:23 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:04.413 [2024-07-26 05:15:23.298182] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:04.413 [2024-07-26 05:15:23.298403] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:18:04.413 [2024-07-26 05:15:23.298453] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:18:04.413 [2024-07-26 05:15:23.298609] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:18:04.413 [2024-07-26 05:15:23.299036] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:18:04.413 [2024-07-26 05:15:23.299057] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:18:04.413 [2024-07-26 05:15:23.299417] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.413 BaseBdev4 00:18:04.413 05:15:23 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:04.413 05:15:23 -- 
common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:18:04.413 05:15:23 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:04.413 05:15:23 -- common/autotest_common.sh@889 -- # local i 00:18:04.413 05:15:23 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:04.413 05:15:23 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:04.413 05:15:23 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:04.678 05:15:23 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:04.678 [ 00:18:04.678 { 00:18:04.678 "name": "BaseBdev4", 00:18:04.678 "aliases": [ 00:18:04.678 "547b294e-0efe-4073-b15d-623eb4c0888f" 00:18:04.678 ], 00:18:04.678 "product_name": "Malloc disk", 00:18:04.678 "block_size": 512, 00:18:04.678 "num_blocks": 65536, 00:18:04.678 "uuid": "547b294e-0efe-4073-b15d-623eb4c0888f", 00:18:04.678 "assigned_rate_limits": { 00:18:04.678 "rw_ios_per_sec": 0, 00:18:04.678 "rw_mbytes_per_sec": 0, 00:18:04.678 "r_mbytes_per_sec": 0, 00:18:04.678 "w_mbytes_per_sec": 0 00:18:04.678 }, 00:18:04.678 "claimed": true, 00:18:04.678 "claim_type": "exclusive_write", 00:18:04.678 "zoned": false, 00:18:04.678 "supported_io_types": { 00:18:04.678 "read": true, 00:18:04.678 "write": true, 00:18:04.678 "unmap": true, 00:18:04.678 "write_zeroes": true, 00:18:04.678 "flush": true, 00:18:04.678 "reset": true, 00:18:04.678 "compare": false, 00:18:04.678 "compare_and_write": false, 00:18:04.678 "abort": true, 00:18:04.678 "nvme_admin": false, 00:18:04.678 "nvme_io": false 00:18:04.678 }, 00:18:04.678 "memory_domains": [ 00:18:04.678 { 00:18:04.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.678 "dma_device_type": 2 00:18:04.678 } 00:18:04.678 ], 00:18:04.678 "driver_specific": {} 00:18:04.678 } 00:18:04.678 ] 00:18:04.678 05:15:23 -- common/autotest_common.sh@895 -- # return 0 00:18:04.678 05:15:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:04.678 05:15:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:04.678 05:15:23 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:04.678 05:15:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:04.678 05:15:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:04.678 05:15:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:04.678 05:15:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:04.678 05:15:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:04.678 05:15:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:04.678 05:15:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:04.678 05:15:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:04.678 05:15:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:04.678 05:15:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:04.678 05:15:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.937 05:15:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:04.937 "name": "Existed_Raid", 00:18:04.937 "uuid": "158079ff-ba60-4c6f-a33e-f30fb22ba20e", 00:18:04.937 "strip_size_kb": 64, 00:18:04.937 "state": "online", 00:18:04.937 "raid_level": "raid0", 00:18:04.937 "superblock": false, 00:18:04.937 "num_base_bdevs": 4, 00:18:04.937 
"num_base_bdevs_discovered": 4, 00:18:04.937 "num_base_bdevs_operational": 4, 00:18:04.937 "base_bdevs_list": [ 00:18:04.937 { 00:18:04.937 "name": "BaseBdev1", 00:18:04.937 "uuid": "0c5e1816-7053-4036-92d2-69c933084d26", 00:18:04.937 "is_configured": true, 00:18:04.937 "data_offset": 0, 00:18:04.937 "data_size": 65536 00:18:04.937 }, 00:18:04.937 { 00:18:04.937 "name": "BaseBdev2", 00:18:04.937 "uuid": "6a6452e1-891e-4748-b984-d3959155c191", 00:18:04.937 "is_configured": true, 00:18:04.937 "data_offset": 0, 00:18:04.937 "data_size": 65536 00:18:04.937 }, 00:18:04.937 { 00:18:04.937 "name": "BaseBdev3", 00:18:04.937 "uuid": "94fdabf6-4ae7-4613-b802-6d7d06a01596", 00:18:04.937 "is_configured": true, 00:18:04.937 "data_offset": 0, 00:18:04.937 "data_size": 65536 00:18:04.937 }, 00:18:04.937 { 00:18:04.937 "name": "BaseBdev4", 00:18:04.937 "uuid": "547b294e-0efe-4073-b15d-623eb4c0888f", 00:18:04.937 "is_configured": true, 00:18:04.937 "data_offset": 0, 00:18:04.937 "data_size": 65536 00:18:04.937 } 00:18:04.937 ] 00:18:04.937 }' 00:18:04.937 05:15:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:04.937 05:15:23 -- common/autotest_common.sh@10 -- # set +x 00:18:05.504 05:15:24 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:05.504 [2024-07-26 05:15:24.546739] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:05.504 [2024-07-26 05:15:24.546944] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:05.504 [2024-07-26 05:15:24.547149] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:05.763 05:15:24 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:05.763 05:15:24 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:18:05.763 05:15:24 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:05.763 05:15:24 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:05.763 05:15:24 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:05.763 05:15:24 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:18:05.763 05:15:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:05.763 05:15:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:05.763 05:15:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:05.763 05:15:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:05.763 05:15:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:05.763 05:15:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:05.763 05:15:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:05.763 05:15:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:05.763 05:15:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:05.763 05:15:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.763 05:15:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.763 05:15:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:05.763 "name": "Existed_Raid", 00:18:05.763 "uuid": "158079ff-ba60-4c6f-a33e-f30fb22ba20e", 00:18:05.763 "strip_size_kb": 64, 00:18:05.763 "state": "offline", 00:18:05.763 "raid_level": "raid0", 00:18:05.763 "superblock": false, 00:18:05.763 "num_base_bdevs": 4, 00:18:05.763 "num_base_bdevs_discovered": 3, 00:18:05.763 "num_base_bdevs_operational": 3, 00:18:05.763 "base_bdevs_list": [ 00:18:05.763 { 
00:18:05.763 "name": null, 00:18:05.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.763 "is_configured": false, 00:18:05.763 "data_offset": 0, 00:18:05.763 "data_size": 65536 00:18:05.763 }, 00:18:05.763 { 00:18:05.763 "name": "BaseBdev2", 00:18:05.763 "uuid": "6a6452e1-891e-4748-b984-d3959155c191", 00:18:05.763 "is_configured": true, 00:18:05.763 "data_offset": 0, 00:18:05.763 "data_size": 65536 00:18:05.763 }, 00:18:05.763 { 00:18:05.763 "name": "BaseBdev3", 00:18:05.763 "uuid": "94fdabf6-4ae7-4613-b802-6d7d06a01596", 00:18:05.763 "is_configured": true, 00:18:05.763 "data_offset": 0, 00:18:05.763 "data_size": 65536 00:18:05.763 }, 00:18:05.763 { 00:18:05.763 "name": "BaseBdev4", 00:18:05.763 "uuid": "547b294e-0efe-4073-b15d-623eb4c0888f", 00:18:05.763 "is_configured": true, 00:18:05.763 "data_offset": 0, 00:18:05.763 "data_size": 65536 00:18:05.763 } 00:18:05.763 ] 00:18:05.763 }' 00:18:05.763 05:15:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:05.763 05:15:24 -- common/autotest_common.sh@10 -- # set +x 00:18:06.330 05:15:25 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:06.330 05:15:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:06.330 05:15:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.330 05:15:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:06.330 05:15:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:06.330 05:15:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:06.330 05:15:25 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:06.589 [2024-07-26 05:15:25.635226] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:06.847 05:15:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:06.847 05:15:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:06.848 05:15:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.848 05:15:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:06.848 05:15:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:06.848 05:15:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:06.848 05:15:25 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:07.106 [2024-07-26 05:15:26.129856] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:07.372 05:15:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:07.372 05:15:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:07.372 05:15:26 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.372 05:15:26 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:07.372 05:15:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:07.372 05:15:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:07.372 05:15:26 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:07.669 [2024-07-26 05:15:26.622340] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:07.669 [2024-07-26 05:15:26.622408] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 
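The sequence this test just walked through (four malloc base bdevs assembled into a raid0 array, then a member removed until the array drops offline) uses only stock rpc.py calls, so it can be replayed by hand against a running bdev_svc. A hedged sketch with the sizes, names, and socket taken from the log; it assumes an SPDK application is already listening on that socket.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # Four 32 MiB malloc bdevs with a 512-byte block size, as in the log.
  for i in 1 2 3 4; do
      "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "BaseBdev$i"
  done

  # Assemble them into the raid0 array "Existed_Raid" with a 64 KiB strip size.
  "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid0 \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

  # raid0 has no redundancy, so deleting any member drives the array offline,
  # which is exactly the transition the dumps above assert.
  "$rpc" -s "$sock" bdev_malloc_delete BaseBdev1
  "$rpc" -s "$sock" bdev_raid_get_bdevs all
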
00:18:07.669 05:15:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:07.669 05:15:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:07.669 05:15:26 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.669 05:15:26 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:07.940 05:15:26 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:07.940 05:15:26 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:07.940 05:15:26 -- bdev/bdev_raid.sh@287 -- # killprocess 74511 00:18:07.940 05:15:26 -- common/autotest_common.sh@926 -- # '[' -z 74511 ']' 00:18:07.940 05:15:26 -- common/autotest_common.sh@930 -- # kill -0 74511 00:18:07.940 05:15:26 -- common/autotest_common.sh@931 -- # uname 00:18:07.940 05:15:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:07.940 05:15:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74511 00:18:07.940 killing process with pid 74511 00:18:07.940 05:15:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:07.940 05:15:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:07.940 05:15:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74511' 00:18:07.940 05:15:26 -- common/autotest_common.sh@945 -- # kill 74511 00:18:07.940 [2024-07-26 05:15:26.961460] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:07.940 05:15:26 -- common/autotest_common.sh@950 -- # wait 74511 00:18:07.940 [2024-07-26 05:15:26.961573] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:09.319 00:18:09.319 real 0m12.069s 00:18:09.319 user 0m20.219s 00:18:09.319 sys 0m1.805s 00:18:09.319 05:15:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:09.319 ************************************ 00:18:09.319 END TEST raid_state_function_test 00:18:09.319 ************************************ 00:18:09.319 05:15:28 -- common/autotest_common.sh@10 -- # set +x 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:18:09.319 05:15:28 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:09.319 05:15:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:09.319 05:15:28 -- common/autotest_common.sh@10 -- # set +x 00:18:09.319 ************************************ 00:18:09.319 START TEST raid_state_function_test_sb 00:18:09.319 ************************************ 00:18:09.319 05:15:28 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 true 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:09.319 05:15:28 -- 
bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:09.319 Process raid pid: 74910 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@226 -- # raid_pid=74910 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 74910' 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@228 -- # waitforlisten 74910 /var/tmp/spdk-raid.sock 00:18:09.319 05:15:28 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:09.319 05:15:28 -- common/autotest_common.sh@819 -- # '[' -z 74910 ']' 00:18:09.319 05:15:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:09.319 05:15:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:09.319 05:15:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:09.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:09.319 05:15:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:09.319 05:15:28 -- common/autotest_common.sh@10 -- # set +x 00:18:09.319 [2024-07-26 05:15:28.150935] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
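The superblock variant starting here differs from the previous run only in passing -s to bdev_raid_create, which keeps raid metadata on the base bdevs. Below is a sketch of the launch-and-create flow with the binary path and flags taken from the log; the readiness loop is a simplified stand-in for the harness's waitforlisten helper, not its actual implementation.

  app=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # Start the bdev service that acts as the RPC target for the test.
  "$app" -r "$sock" -i 0 -L bdev_raid &
  svc_pid=$!

  # Poll until the UNIX-domain socket answers RPCs (simplified waitforlisten).
  until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

  # -s enables the on-bdev superblock; in the dumps that follow, each
  # 65536-block base bdev reports data_offset 2048 and data_size 63488,
  # i.e. 2048 blocks reserved for that metadata.
  "$rpc" -s "$sock" bdev_raid_create -z 64 -s -r raid0 \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

  kill "$svc_pid"
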
00:18:09.319 [2024-07-26 05:15:28.151127] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.319 [2024-07-26 05:15:28.324827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.578 [2024-07-26 05:15:28.489783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.578 [2024-07-26 05:15:28.646014] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:10.144 05:15:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:10.144 05:15:29 -- common/autotest_common.sh@852 -- # return 0 00:18:10.144 05:15:29 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:10.403 [2024-07-26 05:15:29.277264] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:10.403 [2024-07-26 05:15:29.277334] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:10.403 [2024-07-26 05:15:29.277350] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:10.403 [2024-07-26 05:15:29.277366] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:10.403 [2024-07-26 05:15:29.277376] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:10.403 [2024-07-26 05:15:29.277390] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:10.403 [2024-07-26 05:15:29.277398] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:10.403 [2024-07-26 05:15:29.277412] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:10.403 05:15:29 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:10.403 05:15:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:10.403 05:15:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:10.403 05:15:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:10.403 05:15:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:10.403 05:15:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:10.403 05:15:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:10.403 05:15:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:10.403 05:15:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:10.403 05:15:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:10.403 05:15:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.403 05:15:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.661 05:15:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:10.661 "name": "Existed_Raid", 00:18:10.661 "uuid": "1d8d6ecf-eecb-4f3b-8d46-efdf673888c6", 00:18:10.661 "strip_size_kb": 64, 00:18:10.661 "state": "configuring", 00:18:10.661 "raid_level": "raid0", 00:18:10.661 "superblock": true, 00:18:10.661 "num_base_bdevs": 4, 00:18:10.661 "num_base_bdevs_discovered": 0, 00:18:10.661 "num_base_bdevs_operational": 4, 00:18:10.661 "base_bdevs_list": [ 00:18:10.661 { 00:18:10.661 
"name": "BaseBdev1", 00:18:10.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.661 "is_configured": false, 00:18:10.661 "data_offset": 0, 00:18:10.661 "data_size": 0 00:18:10.661 }, 00:18:10.661 { 00:18:10.661 "name": "BaseBdev2", 00:18:10.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.661 "is_configured": false, 00:18:10.661 "data_offset": 0, 00:18:10.661 "data_size": 0 00:18:10.661 }, 00:18:10.661 { 00:18:10.661 "name": "BaseBdev3", 00:18:10.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.662 "is_configured": false, 00:18:10.662 "data_offset": 0, 00:18:10.662 "data_size": 0 00:18:10.662 }, 00:18:10.662 { 00:18:10.662 "name": "BaseBdev4", 00:18:10.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.662 "is_configured": false, 00:18:10.662 "data_offset": 0, 00:18:10.662 "data_size": 0 00:18:10.662 } 00:18:10.662 ] 00:18:10.662 }' 00:18:10.662 05:15:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:10.662 05:15:29 -- common/autotest_common.sh@10 -- # set +x 00:18:10.920 05:15:29 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:11.178 [2024-07-26 05:15:30.049307] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:11.178 [2024-07-26 05:15:30.049584] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:18:11.178 05:15:30 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:11.178 [2024-07-26 05:15:30.257437] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:11.178 [2024-07-26 05:15:30.257513] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:11.178 [2024-07-26 05:15:30.257527] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:11.178 [2024-07-26 05:15:30.257542] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:11.178 [2024-07-26 05:15:30.257550] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:11.178 [2024-07-26 05:15:30.257563] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:11.178 [2024-07-26 05:15:30.257571] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:11.178 [2024-07-26 05:15:30.257583] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:11.178 05:15:30 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:11.436 [2024-07-26 05:15:30.503045] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:11.436 BaseBdev1 00:18:11.436 05:15:30 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:11.436 05:15:30 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:11.436 05:15:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:11.436 05:15:30 -- common/autotest_common.sh@889 -- # local i 00:18:11.436 05:15:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:11.436 05:15:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:11.436 05:15:30 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:11.694 05:15:30 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:11.952 [ 00:18:11.952 { 00:18:11.952 "name": "BaseBdev1", 00:18:11.952 "aliases": [ 00:18:11.952 "5079bd45-91cf-4641-8013-ef236e2dbcb3" 00:18:11.952 ], 00:18:11.952 "product_name": "Malloc disk", 00:18:11.952 "block_size": 512, 00:18:11.952 "num_blocks": 65536, 00:18:11.952 "uuid": "5079bd45-91cf-4641-8013-ef236e2dbcb3", 00:18:11.952 "assigned_rate_limits": { 00:18:11.952 "rw_ios_per_sec": 0, 00:18:11.952 "rw_mbytes_per_sec": 0, 00:18:11.952 "r_mbytes_per_sec": 0, 00:18:11.952 "w_mbytes_per_sec": 0 00:18:11.952 }, 00:18:11.952 "claimed": true, 00:18:11.952 "claim_type": "exclusive_write", 00:18:11.952 "zoned": false, 00:18:11.952 "supported_io_types": { 00:18:11.952 "read": true, 00:18:11.952 "write": true, 00:18:11.952 "unmap": true, 00:18:11.952 "write_zeroes": true, 00:18:11.952 "flush": true, 00:18:11.952 "reset": true, 00:18:11.952 "compare": false, 00:18:11.952 "compare_and_write": false, 00:18:11.952 "abort": true, 00:18:11.952 "nvme_admin": false, 00:18:11.952 "nvme_io": false 00:18:11.952 }, 00:18:11.952 "memory_domains": [ 00:18:11.952 { 00:18:11.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.952 "dma_device_type": 2 00:18:11.952 } 00:18:11.952 ], 00:18:11.952 "driver_specific": {} 00:18:11.952 } 00:18:11.952 ] 00:18:11.952 05:15:30 -- common/autotest_common.sh@895 -- # return 0 00:18:11.952 05:15:30 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:11.952 05:15:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:11.952 05:15:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:11.952 05:15:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:11.952 05:15:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:11.952 05:15:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:11.952 05:15:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:11.952 05:15:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:11.952 05:15:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:11.952 05:15:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:11.952 05:15:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.952 05:15:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.210 05:15:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:12.210 "name": "Existed_Raid", 00:18:12.210 "uuid": "bed02908-b16e-4575-9b1e-333aaea1cd1a", 00:18:12.210 "strip_size_kb": 64, 00:18:12.210 "state": "configuring", 00:18:12.210 "raid_level": "raid0", 00:18:12.210 "superblock": true, 00:18:12.210 "num_base_bdevs": 4, 00:18:12.210 "num_base_bdevs_discovered": 1, 00:18:12.210 "num_base_bdevs_operational": 4, 00:18:12.210 "base_bdevs_list": [ 00:18:12.210 { 00:18:12.210 "name": "BaseBdev1", 00:18:12.210 "uuid": "5079bd45-91cf-4641-8013-ef236e2dbcb3", 00:18:12.210 "is_configured": true, 00:18:12.210 "data_offset": 2048, 00:18:12.210 "data_size": 63488 00:18:12.210 }, 00:18:12.210 { 00:18:12.210 "name": "BaseBdev2", 00:18:12.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.210 "is_configured": false, 00:18:12.210 "data_offset": 0, 00:18:12.210 "data_size": 0 00:18:12.210 }, 
00:18:12.210 { 00:18:12.210 "name": "BaseBdev3", 00:18:12.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.210 "is_configured": false, 00:18:12.210 "data_offset": 0, 00:18:12.210 "data_size": 0 00:18:12.210 }, 00:18:12.210 { 00:18:12.210 "name": "BaseBdev4", 00:18:12.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.210 "is_configured": false, 00:18:12.210 "data_offset": 0, 00:18:12.210 "data_size": 0 00:18:12.210 } 00:18:12.210 ] 00:18:12.210 }' 00:18:12.210 05:15:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:12.211 05:15:31 -- common/autotest_common.sh@10 -- # set +x 00:18:12.469 05:15:31 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:12.728 [2024-07-26 05:15:31.651468] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:12.728 [2024-07-26 05:15:31.651542] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:18:12.728 05:15:31 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:12.728 05:15:31 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:12.987 05:15:31 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:13.245 BaseBdev1 00:18:13.245 05:15:32 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:13.245 05:15:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:13.245 05:15:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:13.245 05:15:32 -- common/autotest_common.sh@889 -- # local i 00:18:13.245 05:15:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:13.245 05:15:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:13.245 05:15:32 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:13.504 05:15:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:13.762 [ 00:18:13.762 { 00:18:13.762 "name": "BaseBdev1", 00:18:13.762 "aliases": [ 00:18:13.762 "dba21f8b-87e1-47d3-ad5e-7b55bb039125" 00:18:13.762 ], 00:18:13.762 "product_name": "Malloc disk", 00:18:13.762 "block_size": 512, 00:18:13.762 "num_blocks": 65536, 00:18:13.762 "uuid": "dba21f8b-87e1-47d3-ad5e-7b55bb039125", 00:18:13.762 "assigned_rate_limits": { 00:18:13.762 "rw_ios_per_sec": 0, 00:18:13.762 "rw_mbytes_per_sec": 0, 00:18:13.762 "r_mbytes_per_sec": 0, 00:18:13.762 "w_mbytes_per_sec": 0 00:18:13.762 }, 00:18:13.762 "claimed": false, 00:18:13.762 "zoned": false, 00:18:13.762 "supported_io_types": { 00:18:13.762 "read": true, 00:18:13.762 "write": true, 00:18:13.762 "unmap": true, 00:18:13.762 "write_zeroes": true, 00:18:13.762 "flush": true, 00:18:13.762 "reset": true, 00:18:13.762 "compare": false, 00:18:13.762 "compare_and_write": false, 00:18:13.762 "abort": true, 00:18:13.762 "nvme_admin": false, 00:18:13.762 "nvme_io": false 00:18:13.762 }, 00:18:13.762 "memory_domains": [ 00:18:13.762 { 00:18:13.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.762 "dma_device_type": 2 00:18:13.762 } 00:18:13.762 ], 00:18:13.762 "driver_specific": {} 00:18:13.762 } 00:18:13.762 ] 00:18:13.762 05:15:32 -- common/autotest_common.sh@895 -- # return 0 00:18:13.762 05:15:32 -- bdev/bdev_raid.sh@253 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:13.762 [2024-07-26 05:15:32.870410] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:14.019 [2024-07-26 05:15:32.872702] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:14.019 [2024-07-26 05:15:32.872787] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:14.020 [2024-07-26 05:15:32.872803] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:14.020 [2024-07-26 05:15:32.872835] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:14.020 [2024-07-26 05:15:32.872844] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:14.020 [2024-07-26 05:15:32.872860] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:14.020 05:15:32 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:14.020 05:15:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:14.020 05:15:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:14.020 05:15:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:14.020 05:15:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:14.020 05:15:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:14.020 05:15:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:14.020 05:15:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:14.020 05:15:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:14.020 05:15:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:14.020 05:15:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:14.020 05:15:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:14.020 05:15:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.020 05:15:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.020 05:15:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:14.020 "name": "Existed_Raid", 00:18:14.020 "uuid": "3e087e9e-8bb6-4384-985c-a6651ffaea8b", 00:18:14.020 "strip_size_kb": 64, 00:18:14.020 "state": "configuring", 00:18:14.020 "raid_level": "raid0", 00:18:14.020 "superblock": true, 00:18:14.020 "num_base_bdevs": 4, 00:18:14.020 "num_base_bdevs_discovered": 1, 00:18:14.020 "num_base_bdevs_operational": 4, 00:18:14.020 "base_bdevs_list": [ 00:18:14.020 { 00:18:14.020 "name": "BaseBdev1", 00:18:14.020 "uuid": "dba21f8b-87e1-47d3-ad5e-7b55bb039125", 00:18:14.020 "is_configured": true, 00:18:14.020 "data_offset": 2048, 00:18:14.020 "data_size": 63488 00:18:14.020 }, 00:18:14.020 { 00:18:14.020 "name": "BaseBdev2", 00:18:14.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.020 "is_configured": false, 00:18:14.020 "data_offset": 0, 00:18:14.020 "data_size": 0 00:18:14.020 }, 00:18:14.020 { 00:18:14.020 "name": "BaseBdev3", 00:18:14.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.020 "is_configured": false, 00:18:14.020 "data_offset": 0, 00:18:14.020 "data_size": 0 00:18:14.020 }, 00:18:14.020 { 00:18:14.020 "name": "BaseBdev4", 00:18:14.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.020 "is_configured": 
false, 00:18:14.020 "data_offset": 0, 00:18:14.020 "data_size": 0 00:18:14.020 } 00:18:14.020 ] 00:18:14.020 }' 00:18:14.020 05:15:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:14.020 05:15:33 -- common/autotest_common.sh@10 -- # set +x 00:18:14.586 05:15:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:14.586 [2024-07-26 05:15:33.651042] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:14.586 BaseBdev2 00:18:14.586 05:15:33 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:14.586 05:15:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:14.586 05:15:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:14.586 05:15:33 -- common/autotest_common.sh@889 -- # local i 00:18:14.586 05:15:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:14.586 05:15:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:14.586 05:15:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:14.845 05:15:33 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:15.103 [ 00:18:15.103 { 00:18:15.103 "name": "BaseBdev2", 00:18:15.103 "aliases": [ 00:18:15.103 "96fe7d86-4beb-4c10-b204-b4093afe501a" 00:18:15.103 ], 00:18:15.103 "product_name": "Malloc disk", 00:18:15.103 "block_size": 512, 00:18:15.103 "num_blocks": 65536, 00:18:15.103 "uuid": "96fe7d86-4beb-4c10-b204-b4093afe501a", 00:18:15.103 "assigned_rate_limits": { 00:18:15.103 "rw_ios_per_sec": 0, 00:18:15.103 "rw_mbytes_per_sec": 0, 00:18:15.103 "r_mbytes_per_sec": 0, 00:18:15.103 "w_mbytes_per_sec": 0 00:18:15.103 }, 00:18:15.103 "claimed": true, 00:18:15.103 "claim_type": "exclusive_write", 00:18:15.103 "zoned": false, 00:18:15.103 "supported_io_types": { 00:18:15.103 "read": true, 00:18:15.103 "write": true, 00:18:15.103 "unmap": true, 00:18:15.103 "write_zeroes": true, 00:18:15.103 "flush": true, 00:18:15.103 "reset": true, 00:18:15.103 "compare": false, 00:18:15.103 "compare_and_write": false, 00:18:15.103 "abort": true, 00:18:15.103 "nvme_admin": false, 00:18:15.103 "nvme_io": false 00:18:15.103 }, 00:18:15.103 "memory_domains": [ 00:18:15.103 { 00:18:15.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.103 "dma_device_type": 2 00:18:15.103 } 00:18:15.103 ], 00:18:15.103 "driver_specific": {} 00:18:15.103 } 00:18:15.103 ] 00:18:15.103 05:15:34 -- common/autotest_common.sh@895 -- # return 0 00:18:15.103 05:15:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:15.103 05:15:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:15.103 05:15:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:15.103 05:15:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:15.103 05:15:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:15.103 05:15:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:15.103 05:15:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:15.103 05:15:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:15.103 05:15:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:15.103 05:15:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:15.103 05:15:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:15.103 
05:15:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:15.103 05:15:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.103 05:15:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.362 05:15:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:15.362 "name": "Existed_Raid", 00:18:15.362 "uuid": "3e087e9e-8bb6-4384-985c-a6651ffaea8b", 00:18:15.362 "strip_size_kb": 64, 00:18:15.362 "state": "configuring", 00:18:15.362 "raid_level": "raid0", 00:18:15.362 "superblock": true, 00:18:15.362 "num_base_bdevs": 4, 00:18:15.362 "num_base_bdevs_discovered": 2, 00:18:15.362 "num_base_bdevs_operational": 4, 00:18:15.362 "base_bdevs_list": [ 00:18:15.362 { 00:18:15.362 "name": "BaseBdev1", 00:18:15.362 "uuid": "dba21f8b-87e1-47d3-ad5e-7b55bb039125", 00:18:15.362 "is_configured": true, 00:18:15.362 "data_offset": 2048, 00:18:15.362 "data_size": 63488 00:18:15.362 }, 00:18:15.362 { 00:18:15.362 "name": "BaseBdev2", 00:18:15.362 "uuid": "96fe7d86-4beb-4c10-b204-b4093afe501a", 00:18:15.362 "is_configured": true, 00:18:15.362 "data_offset": 2048, 00:18:15.362 "data_size": 63488 00:18:15.362 }, 00:18:15.362 { 00:18:15.362 "name": "BaseBdev3", 00:18:15.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.362 "is_configured": false, 00:18:15.362 "data_offset": 0, 00:18:15.362 "data_size": 0 00:18:15.362 }, 00:18:15.362 { 00:18:15.362 "name": "BaseBdev4", 00:18:15.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.362 "is_configured": false, 00:18:15.362 "data_offset": 0, 00:18:15.362 "data_size": 0 00:18:15.362 } 00:18:15.362 ] 00:18:15.362 }' 00:18:15.362 05:15:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:15.362 05:15:34 -- common/autotest_common.sh@10 -- # set +x 00:18:15.620 05:15:34 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:15.879 [2024-07-26 05:15:34.892613] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:15.879 BaseBdev3 00:18:15.879 05:15:34 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:15.879 05:15:34 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:15.879 05:15:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:15.879 05:15:34 -- common/autotest_common.sh@889 -- # local i 00:18:15.879 05:15:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:15.879 05:15:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:15.879 05:15:34 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:16.137 05:15:35 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:16.395 [ 00:18:16.395 { 00:18:16.395 "name": "BaseBdev3", 00:18:16.395 "aliases": [ 00:18:16.395 "3b8f86df-a55e-48d9-b886-791bd0310786" 00:18:16.395 ], 00:18:16.395 "product_name": "Malloc disk", 00:18:16.395 "block_size": 512, 00:18:16.395 "num_blocks": 65536, 00:18:16.395 "uuid": "3b8f86df-a55e-48d9-b886-791bd0310786", 00:18:16.395 "assigned_rate_limits": { 00:18:16.395 "rw_ios_per_sec": 0, 00:18:16.395 "rw_mbytes_per_sec": 0, 00:18:16.395 "r_mbytes_per_sec": 0, 00:18:16.395 "w_mbytes_per_sec": 0 00:18:16.395 }, 00:18:16.395 "claimed": true, 00:18:16.395 "claim_type": "exclusive_write", 00:18:16.395 "zoned": false, 
00:18:16.395 "supported_io_types": { 00:18:16.395 "read": true, 00:18:16.395 "write": true, 00:18:16.395 "unmap": true, 00:18:16.395 "write_zeroes": true, 00:18:16.395 "flush": true, 00:18:16.395 "reset": true, 00:18:16.395 "compare": false, 00:18:16.395 "compare_and_write": false, 00:18:16.395 "abort": true, 00:18:16.395 "nvme_admin": false, 00:18:16.395 "nvme_io": false 00:18:16.395 }, 00:18:16.395 "memory_domains": [ 00:18:16.395 { 00:18:16.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.395 "dma_device_type": 2 00:18:16.395 } 00:18:16.395 ], 00:18:16.395 "driver_specific": {} 00:18:16.395 } 00:18:16.395 ] 00:18:16.395 05:15:35 -- common/autotest_common.sh@895 -- # return 0 00:18:16.395 05:15:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:16.395 05:15:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:16.395 05:15:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:16.395 05:15:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:16.395 05:15:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:16.395 05:15:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:16.395 05:15:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:16.395 05:15:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:16.395 05:15:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:16.395 05:15:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:16.395 05:15:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:16.395 05:15:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:16.395 05:15:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.395 05:15:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.653 05:15:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:16.653 "name": "Existed_Raid", 00:18:16.653 "uuid": "3e087e9e-8bb6-4384-985c-a6651ffaea8b", 00:18:16.653 "strip_size_kb": 64, 00:18:16.653 "state": "configuring", 00:18:16.653 "raid_level": "raid0", 00:18:16.653 "superblock": true, 00:18:16.653 "num_base_bdevs": 4, 00:18:16.653 "num_base_bdevs_discovered": 3, 00:18:16.653 "num_base_bdevs_operational": 4, 00:18:16.653 "base_bdevs_list": [ 00:18:16.653 { 00:18:16.653 "name": "BaseBdev1", 00:18:16.653 "uuid": "dba21f8b-87e1-47d3-ad5e-7b55bb039125", 00:18:16.653 "is_configured": true, 00:18:16.653 "data_offset": 2048, 00:18:16.653 "data_size": 63488 00:18:16.653 }, 00:18:16.653 { 00:18:16.653 "name": "BaseBdev2", 00:18:16.653 "uuid": "96fe7d86-4beb-4c10-b204-b4093afe501a", 00:18:16.653 "is_configured": true, 00:18:16.653 "data_offset": 2048, 00:18:16.653 "data_size": 63488 00:18:16.653 }, 00:18:16.653 { 00:18:16.653 "name": "BaseBdev3", 00:18:16.653 "uuid": "3b8f86df-a55e-48d9-b886-791bd0310786", 00:18:16.653 "is_configured": true, 00:18:16.653 "data_offset": 2048, 00:18:16.653 "data_size": 63488 00:18:16.653 }, 00:18:16.653 { 00:18:16.653 "name": "BaseBdev4", 00:18:16.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.653 "is_configured": false, 00:18:16.653 "data_offset": 0, 00:18:16.653 "data_size": 0 00:18:16.653 } 00:18:16.654 ] 00:18:16.654 }' 00:18:16.654 05:15:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:16.654 05:15:35 -- common/autotest_common.sh@10 -- # set +x 00:18:16.912 05:15:35 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev4 00:18:17.170 [2024-07-26 05:15:36.138365] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:17.170 [2024-07-26 05:15:36.138778] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:18:17.170 [2024-07-26 05:15:36.138925] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:17.170 [2024-07-26 05:15:36.139192] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:18:17.170 BaseBdev4 00:18:17.170 [2024-07-26 05:15:36.139689] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:18:17.170 [2024-07-26 05:15:36.139713] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:18:17.170 [2024-07-26 05:15:36.139880] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.170 05:15:36 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:17.170 05:15:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:18:17.170 05:15:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:17.170 05:15:36 -- common/autotest_common.sh@889 -- # local i 00:18:17.170 05:15:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:17.170 05:15:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:17.170 05:15:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:17.431 05:15:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:17.695 [ 00:18:17.695 { 00:18:17.695 "name": "BaseBdev4", 00:18:17.695 "aliases": [ 00:18:17.695 "4fb28899-4fd6-412e-8d92-38b3daadcd74" 00:18:17.695 ], 00:18:17.695 "product_name": "Malloc disk", 00:18:17.695 "block_size": 512, 00:18:17.695 "num_blocks": 65536, 00:18:17.695 "uuid": "4fb28899-4fd6-412e-8d92-38b3daadcd74", 00:18:17.695 "assigned_rate_limits": { 00:18:17.695 "rw_ios_per_sec": 0, 00:18:17.695 "rw_mbytes_per_sec": 0, 00:18:17.695 "r_mbytes_per_sec": 0, 00:18:17.695 "w_mbytes_per_sec": 0 00:18:17.695 }, 00:18:17.695 "claimed": true, 00:18:17.695 "claim_type": "exclusive_write", 00:18:17.695 "zoned": false, 00:18:17.695 "supported_io_types": { 00:18:17.695 "read": true, 00:18:17.695 "write": true, 00:18:17.695 "unmap": true, 00:18:17.695 "write_zeroes": true, 00:18:17.695 "flush": true, 00:18:17.695 "reset": true, 00:18:17.695 "compare": false, 00:18:17.695 "compare_and_write": false, 00:18:17.695 "abort": true, 00:18:17.695 "nvme_admin": false, 00:18:17.695 "nvme_io": false 00:18:17.695 }, 00:18:17.695 "memory_domains": [ 00:18:17.695 { 00:18:17.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.695 "dma_device_type": 2 00:18:17.695 } 00:18:17.695 ], 00:18:17.695 "driver_specific": {} 00:18:17.695 } 00:18:17.695 ] 00:18:17.695 05:15:36 -- common/autotest_common.sh@895 -- # return 0 00:18:17.695 05:15:36 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:17.695 05:15:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:17.695 05:15:36 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:17.695 05:15:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:17.695 05:15:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:17.695 05:15:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 
00:18:17.695 05:15:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:17.695 05:15:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:17.695 05:15:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:17.695 05:15:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:17.695 05:15:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:17.695 05:15:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:17.695 05:15:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.695 05:15:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.953 05:15:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:17.953 "name": "Existed_Raid", 00:18:17.953 "uuid": "3e087e9e-8bb6-4384-985c-a6651ffaea8b", 00:18:17.953 "strip_size_kb": 64, 00:18:17.953 "state": "online", 00:18:17.953 "raid_level": "raid0", 00:18:17.953 "superblock": true, 00:18:17.953 "num_base_bdevs": 4, 00:18:17.953 "num_base_bdevs_discovered": 4, 00:18:17.953 "num_base_bdevs_operational": 4, 00:18:17.953 "base_bdevs_list": [ 00:18:17.953 { 00:18:17.953 "name": "BaseBdev1", 00:18:17.953 "uuid": "dba21f8b-87e1-47d3-ad5e-7b55bb039125", 00:18:17.953 "is_configured": true, 00:18:17.953 "data_offset": 2048, 00:18:17.953 "data_size": 63488 00:18:17.953 }, 00:18:17.953 { 00:18:17.953 "name": "BaseBdev2", 00:18:17.953 "uuid": "96fe7d86-4beb-4c10-b204-b4093afe501a", 00:18:17.953 "is_configured": true, 00:18:17.953 "data_offset": 2048, 00:18:17.953 "data_size": 63488 00:18:17.953 }, 00:18:17.953 { 00:18:17.953 "name": "BaseBdev3", 00:18:17.953 "uuid": "3b8f86df-a55e-48d9-b886-791bd0310786", 00:18:17.953 "is_configured": true, 00:18:17.953 "data_offset": 2048, 00:18:17.953 "data_size": 63488 00:18:17.953 }, 00:18:17.953 { 00:18:17.953 "name": "BaseBdev4", 00:18:17.953 "uuid": "4fb28899-4fd6-412e-8d92-38b3daadcd74", 00:18:17.953 "is_configured": true, 00:18:17.953 "data_offset": 2048, 00:18:17.953 "data_size": 63488 00:18:17.953 } 00:18:17.953 ] 00:18:17.953 }' 00:18:17.953 05:15:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:17.953 05:15:36 -- common/autotest_common.sh@10 -- # set +x 00:18:18.211 05:15:37 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:18.469 [2024-07-26 05:15:37.378850] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:18.470 [2024-07-26 05:15:37.379052] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:18.470 [2024-07-26 05:15:37.379178] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:18.470 05:15:37 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:18.470 05:15:37 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:18:18.470 05:15:37 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:18.470 05:15:37 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:18.470 05:15:37 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:18.470 05:15:37 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:18:18.470 05:15:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:18.470 05:15:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:18.470 05:15:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:18.470 05:15:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:18.470 05:15:37 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=3 00:18:18.470 05:15:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:18.470 05:15:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:18.470 05:15:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:18.470 05:15:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:18.470 05:15:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.470 05:15:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.727 05:15:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:18.727 "name": "Existed_Raid", 00:18:18.727 "uuid": "3e087e9e-8bb6-4384-985c-a6651ffaea8b", 00:18:18.727 "strip_size_kb": 64, 00:18:18.727 "state": "offline", 00:18:18.727 "raid_level": "raid0", 00:18:18.727 "superblock": true, 00:18:18.727 "num_base_bdevs": 4, 00:18:18.727 "num_base_bdevs_discovered": 3, 00:18:18.727 "num_base_bdevs_operational": 3, 00:18:18.727 "base_bdevs_list": [ 00:18:18.727 { 00:18:18.727 "name": null, 00:18:18.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.728 "is_configured": false, 00:18:18.728 "data_offset": 2048, 00:18:18.728 "data_size": 63488 00:18:18.728 }, 00:18:18.728 { 00:18:18.728 "name": "BaseBdev2", 00:18:18.728 "uuid": "96fe7d86-4beb-4c10-b204-b4093afe501a", 00:18:18.728 "is_configured": true, 00:18:18.728 "data_offset": 2048, 00:18:18.728 "data_size": 63488 00:18:18.728 }, 00:18:18.728 { 00:18:18.728 "name": "BaseBdev3", 00:18:18.728 "uuid": "3b8f86df-a55e-48d9-b886-791bd0310786", 00:18:18.728 "is_configured": true, 00:18:18.728 "data_offset": 2048, 00:18:18.728 "data_size": 63488 00:18:18.728 }, 00:18:18.728 { 00:18:18.728 "name": "BaseBdev4", 00:18:18.728 "uuid": "4fb28899-4fd6-412e-8d92-38b3daadcd74", 00:18:18.728 "is_configured": true, 00:18:18.728 "data_offset": 2048, 00:18:18.728 "data_size": 63488 00:18:18.728 } 00:18:18.728 ] 00:18:18.728 }' 00:18:18.728 05:15:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:18.728 05:15:37 -- common/autotest_common.sh@10 -- # set +x 00:18:18.985 05:15:38 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:18.985 05:15:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:18.985 05:15:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.985 05:15:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:19.243 05:15:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:19.243 05:15:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:19.243 05:15:38 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:19.502 [2024-07-26 05:15:38.468548] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:19.502 05:15:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:19.502 05:15:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:19.502 05:15:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.502 05:15:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:19.760 05:15:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:19.760 05:15:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:19.760 05:15:38 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev3 00:18:20.019 [2024-07-26 05:15:38.994663] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:20.019 05:15:39 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:20.019 05:15:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:20.019 05:15:39 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.019 05:15:39 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:20.277 05:15:39 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:20.277 05:15:39 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:20.277 05:15:39 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:20.535 [2024-07-26 05:15:39.515953] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:20.535 [2024-07-26 05:15:39.516052] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:18:20.535 05:15:39 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:20.535 05:15:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:20.535 05:15:39 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.535 05:15:39 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:20.793 05:15:39 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:20.793 05:15:39 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:20.793 05:15:39 -- bdev/bdev_raid.sh@287 -- # killprocess 74910 00:18:20.793 05:15:39 -- common/autotest_common.sh@926 -- # '[' -z 74910 ']' 00:18:20.793 05:15:39 -- common/autotest_common.sh@930 -- # kill -0 74910 00:18:20.793 05:15:39 -- common/autotest_common.sh@931 -- # uname 00:18:20.793 05:15:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:20.793 05:15:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74910 00:18:20.793 killing process with pid 74910 00:18:20.793 05:15:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:20.793 05:15:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:20.793 05:15:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74910' 00:18:20.793 05:15:39 -- common/autotest_common.sh@945 -- # kill 74910 00:18:20.793 05:15:39 -- common/autotest_common.sh@950 -- # wait 74910 00:18:20.793 [2024-07-26 05:15:39.870750] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:20.793 [2024-07-26 05:15:39.870945] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:22.167 ************************************ 00:18:22.167 END TEST raid_state_function_test_sb 00:18:22.167 ************************************ 00:18:22.167 05:15:41 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:22.167 00:18:22.167 real 0m12.969s 00:18:22.167 user 0m21.711s 00:18:22.167 sys 0m1.862s 00:18:22.167 05:15:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:22.167 05:15:41 -- common/autotest_common.sh@10 -- # set +x 00:18:22.167 05:15:41 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:18:22.167 05:15:41 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:22.167 05:15:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:22.167 05:15:41 -- common/autotest_common.sh@10 -- # set +x 00:18:22.167 ************************************ 00:18:22.167 START TEST 
raid_superblock_test 00:18:22.167 ************************************ 00:18:22.167 05:15:41 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 4 00:18:22.167 05:15:41 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:18:22.167 05:15:41 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:18:22.167 05:15:41 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:22.167 05:15:41 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:22.167 05:15:41 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:22.167 05:15:41 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:22.167 05:15:41 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:22.167 05:15:41 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:22.167 05:15:41 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:22.167 05:15:41 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:22.167 05:15:41 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:22.167 05:15:41 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:22.167 05:15:41 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:22.167 05:15:41 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:18:22.167 05:15:41 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:18:22.167 05:15:41 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:18:22.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:22.167 05:15:41 -- bdev/bdev_raid.sh@357 -- # raid_pid=75318 00:18:22.167 05:15:41 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:22.167 05:15:41 -- bdev/bdev_raid.sh@358 -- # waitforlisten 75318 /var/tmp/spdk-raid.sock 00:18:22.167 05:15:41 -- common/autotest_common.sh@819 -- # '[' -z 75318 ']' 00:18:22.167 05:15:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:22.167 05:15:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:22.167 05:15:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:22.167 05:15:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:22.167 05:15:41 -- common/autotest_common.sh@10 -- # set +x 00:18:22.167 [2024-07-26 05:15:41.168716] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:18:22.167 [2024-07-26 05:15:41.168911] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75318 ] 00:18:22.425 [2024-07-26 05:15:41.337968] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.683 [2024-07-26 05:15:41.559423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.683 [2024-07-26 05:15:41.739134] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:23.248 05:15:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:23.248 05:15:42 -- common/autotest_common.sh@852 -- # return 0 00:18:23.248 05:15:42 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:23.248 05:15:42 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:23.248 05:15:42 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:23.248 05:15:42 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:23.248 05:15:42 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:23.248 05:15:42 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:23.248 05:15:42 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:23.248 05:15:42 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:23.248 05:15:42 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:23.248 malloc1 00:18:23.248 05:15:42 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:23.506 [2024-07-26 05:15:42.561882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:23.506 [2024-07-26 05:15:42.562187] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.506 [2024-07-26 05:15:42.562243] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:18:23.506 [2024-07-26 05:15:42.562260] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.506 [2024-07-26 05:15:42.564743] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.506 [2024-07-26 05:15:42.564789] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:23.506 pt1 00:18:23.506 05:15:42 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:23.506 05:15:42 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:23.506 05:15:42 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:23.506 05:15:42 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:23.506 05:15:42 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:23.506 05:15:42 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:23.506 05:15:42 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:23.506 05:15:42 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:23.506 05:15:42 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:23.765 malloc2 00:18:23.765 05:15:42 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:18:24.023 [2024-07-26 05:15:43.032266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:24.023 [2024-07-26 05:15:43.032346] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.023 [2024-07-26 05:15:43.032383] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:18:24.023 [2024-07-26 05:15:43.032399] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:24.023 [2024-07-26 05:15:43.034867] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:24.023 [2024-07-26 05:15:43.034914] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:24.023 pt2 00:18:24.023 05:15:43 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:24.023 05:15:43 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:24.023 05:15:43 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:18:24.023 05:15:43 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:18:24.023 05:15:43 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:24.023 05:15:43 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:24.023 05:15:43 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:24.023 05:15:43 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:24.023 05:15:43 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:24.282 malloc3 00:18:24.282 05:15:43 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:24.540 [2024-07-26 05:15:43.527335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:24.540 [2024-07-26 05:15:43.527410] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.540 [2024-07-26 05:15:43.527446] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:18:24.540 [2024-07-26 05:15:43.527462] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:24.540 [2024-07-26 05:15:43.529876] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:24.540 [2024-07-26 05:15:43.529922] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:24.540 pt3 00:18:24.540 05:15:43 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:24.540 05:15:43 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:24.540 05:15:43 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:18:24.540 05:15:43 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:18:24.540 05:15:43 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:24.540 05:15:43 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:24.540 05:15:43 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:24.540 05:15:43 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:24.540 05:15:43 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:24.798 malloc4 00:18:24.798 05:15:43 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:18:25.056 [2024-07-26 05:15:44.014413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:25.056 [2024-07-26 05:15:44.014651] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.056 [2024-07-26 05:15:44.014706] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:18:25.056 [2024-07-26 05:15:44.014724] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.056 [2024-07-26 05:15:44.017189] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.056 [2024-07-26 05:15:44.017235] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:25.056 pt4 00:18:25.056 05:15:44 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:25.056 05:15:44 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:25.056 05:15:44 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:25.315 [2024-07-26 05:15:44.234539] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:25.315 [2024-07-26 05:15:44.236755] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:25.315 [2024-07-26 05:15:44.236869] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:25.315 [2024-07-26 05:15:44.236944] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:25.315 [2024-07-26 05:15:44.237208] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:18:25.315 [2024-07-26 05:15:44.237229] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:25.315 [2024-07-26 05:15:44.237367] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:18:25.315 [2024-07-26 05:15:44.237766] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:18:25.315 [2024-07-26 05:15:44.237791] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009380 00:18:25.315 [2024-07-26 05:15:44.237954] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.315 05:15:44 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:25.315 05:15:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:25.315 05:15:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:25.315 05:15:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:25.315 05:15:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:25.315 05:15:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:25.315 05:15:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:25.315 05:15:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:25.315 05:15:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:25.315 05:15:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:25.315 05:15:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.315 05:15:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.573 05:15:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:25.573 "name": "raid_bdev1", 00:18:25.573 "uuid": 
"bf1ef98b-164a-430f-984c-28b02162f733", 00:18:25.573 "strip_size_kb": 64, 00:18:25.573 "state": "online", 00:18:25.573 "raid_level": "raid0", 00:18:25.573 "superblock": true, 00:18:25.573 "num_base_bdevs": 4, 00:18:25.573 "num_base_bdevs_discovered": 4, 00:18:25.573 "num_base_bdevs_operational": 4, 00:18:25.573 "base_bdevs_list": [ 00:18:25.573 { 00:18:25.573 "name": "pt1", 00:18:25.573 "uuid": "20cc0bf5-accf-5499-a166-d88a55fc7696", 00:18:25.573 "is_configured": true, 00:18:25.573 "data_offset": 2048, 00:18:25.573 "data_size": 63488 00:18:25.573 }, 00:18:25.573 { 00:18:25.573 "name": "pt2", 00:18:25.573 "uuid": "2acc7f69-c130-5a6b-9196-0a11783dc8f3", 00:18:25.573 "is_configured": true, 00:18:25.573 "data_offset": 2048, 00:18:25.573 "data_size": 63488 00:18:25.573 }, 00:18:25.573 { 00:18:25.573 "name": "pt3", 00:18:25.573 "uuid": "49a4ae5a-38af-5075-a039-182d5c488921", 00:18:25.573 "is_configured": true, 00:18:25.573 "data_offset": 2048, 00:18:25.573 "data_size": 63488 00:18:25.573 }, 00:18:25.573 { 00:18:25.573 "name": "pt4", 00:18:25.573 "uuid": "5397b220-6598-598d-a19b-02e2a0721b2c", 00:18:25.573 "is_configured": true, 00:18:25.573 "data_offset": 2048, 00:18:25.573 "data_size": 63488 00:18:25.573 } 00:18:25.573 ] 00:18:25.573 }' 00:18:25.573 05:15:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:25.573 05:15:44 -- common/autotest_common.sh@10 -- # set +x 00:18:25.832 05:15:44 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:25.832 05:15:44 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:26.090 [2024-07-26 05:15:45.038943] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:26.090 05:15:45 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=bf1ef98b-164a-430f-984c-28b02162f733 00:18:26.090 05:15:45 -- bdev/bdev_raid.sh@380 -- # '[' -z bf1ef98b-164a-430f-984c-28b02162f733 ']' 00:18:26.090 05:15:45 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:26.349 [2024-07-26 05:15:45.238747] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:26.349 [2024-07-26 05:15:45.239105] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:26.349 [2024-07-26 05:15:45.239231] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:26.349 [2024-07-26 05:15:45.239315] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:26.349 [2024-07-26 05:15:45.239331] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, state offline 00:18:26.349 05:15:45 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:26.349 05:15:45 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.608 05:15:45 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:26.608 05:15:45 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:26.608 05:15:45 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:26.608 05:15:45 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:26.608 05:15:45 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:26.608 05:15:45 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:18:26.867 05:15:45 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:26.867 05:15:45 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:27.125 05:15:46 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:27.125 05:15:46 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:27.384 05:15:46 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:27.384 05:15:46 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:27.642 05:15:46 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:27.642 05:15:46 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:27.642 05:15:46 -- common/autotest_common.sh@640 -- # local es=0 00:18:27.642 05:15:46 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:27.642 05:15:46 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:27.642 05:15:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:27.642 05:15:46 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:27.642 05:15:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:27.642 05:15:46 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:27.642 05:15:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:27.642 05:15:46 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:27.642 05:15:46 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:27.642 05:15:46 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:27.901 [2024-07-26 05:15:46.779114] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:27.901 [2024-07-26 05:15:46.781281] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:27.901 [2024-07-26 05:15:46.781346] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:27.901 [2024-07-26 05:15:46.781407] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:27.901 [2024-07-26 05:15:46.781468] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:27.901 [2024-07-26 05:15:46.781543] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:27.901 [2024-07-26 05:15:46.781576] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:18:27.901 [2024-07-26 05:15:46.781601] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:18:27.901 [2024-07-26 05:15:46.781623] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:27.901 [2024-07-26 05:15:46.781637] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state configuring 00:18:27.901 request: 00:18:27.901 { 00:18:27.901 "name": "raid_bdev1", 00:18:27.901 "raid_level": "raid0", 00:18:27.901 "base_bdevs": [ 00:18:27.901 "malloc1", 00:18:27.901 "malloc2", 00:18:27.901 "malloc3", 00:18:27.901 "malloc4" 00:18:27.901 ], 00:18:27.901 "superblock": false, 00:18:27.901 "strip_size_kb": 64, 00:18:27.901 "method": "bdev_raid_create", 00:18:27.901 "req_id": 1 00:18:27.901 } 00:18:27.901 Got JSON-RPC error response 00:18:27.901 response: 00:18:27.901 { 00:18:27.901 "code": -17, 00:18:27.901 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:27.901 } 00:18:27.901 05:15:46 -- common/autotest_common.sh@643 -- # es=1 00:18:27.901 05:15:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:27.901 05:15:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:27.901 05:15:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:27.901 05:15:46 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.901 05:15:46 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:28.159 05:15:47 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:28.159 05:15:47 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:28.159 05:15:47 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:28.159 [2024-07-26 05:15:47.239192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:28.159 [2024-07-26 05:15:47.239294] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.159 [2024-07-26 05:15:47.239341] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:18:28.159 [2024-07-26 05:15:47.239354] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.159 [2024-07-26 05:15:47.241744] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.159 [2024-07-26 05:15:47.241787] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:28.159 [2024-07-26 05:15:47.241935] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:28.159 [2024-07-26 05:15:47.241999] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:28.159 pt1 00:18:28.159 05:15:47 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:18:28.159 05:15:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:28.159 05:15:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:28.159 05:15:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:28.159 05:15:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:28.159 05:15:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:28.159 05:15:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:28.159 05:15:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:28.159 05:15:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:28.159 05:15:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:28.159 05:15:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
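For reference, the failure path exercised above (re-creating raid_bdev1 directly from malloc bdevs that still carry its superblock, then re-registering pt1 so the examine path can reclaim it) can be replayed by hand against the same RPC socket. A minimal sketch, assuming the bdev_svc daemon from this run is still listening on /var/tmp/spdk-raid.sock:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # malloc1..malloc4 still carry the raid0 superblock, so this create is expected
    # to fail with JSON-RPC error -17 "File exists", exactly as logged above
    "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 \
        || echo "create rejected as expected"
    # re-registering a passthru bdev on top of malloc1 lets the examine path claim it back
    "$rpc" -s "$sock" bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    # with only pt1 rediscovered, raid_bdev1 should report state "configuring"
    "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'

The same pattern repeats for pt2, pt3 and pt4 in the log that follows, with the raid flipping back to "online" once the fourth passthru bdev is claimed.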
00:18:28.159 05:15:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.439 05:15:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:28.439 "name": "raid_bdev1", 00:18:28.439 "uuid": "bf1ef98b-164a-430f-984c-28b02162f733", 00:18:28.439 "strip_size_kb": 64, 00:18:28.439 "state": "configuring", 00:18:28.439 "raid_level": "raid0", 00:18:28.439 "superblock": true, 00:18:28.439 "num_base_bdevs": 4, 00:18:28.439 "num_base_bdevs_discovered": 1, 00:18:28.439 "num_base_bdevs_operational": 4, 00:18:28.439 "base_bdevs_list": [ 00:18:28.439 { 00:18:28.439 "name": "pt1", 00:18:28.439 "uuid": "20cc0bf5-accf-5499-a166-d88a55fc7696", 00:18:28.439 "is_configured": true, 00:18:28.439 "data_offset": 2048, 00:18:28.439 "data_size": 63488 00:18:28.439 }, 00:18:28.439 { 00:18:28.439 "name": null, 00:18:28.439 "uuid": "2acc7f69-c130-5a6b-9196-0a11783dc8f3", 00:18:28.439 "is_configured": false, 00:18:28.439 "data_offset": 2048, 00:18:28.439 "data_size": 63488 00:18:28.439 }, 00:18:28.439 { 00:18:28.439 "name": null, 00:18:28.439 "uuid": "49a4ae5a-38af-5075-a039-182d5c488921", 00:18:28.439 "is_configured": false, 00:18:28.439 "data_offset": 2048, 00:18:28.439 "data_size": 63488 00:18:28.439 }, 00:18:28.439 { 00:18:28.439 "name": null, 00:18:28.439 "uuid": "5397b220-6598-598d-a19b-02e2a0721b2c", 00:18:28.439 "is_configured": false, 00:18:28.439 "data_offset": 2048, 00:18:28.439 "data_size": 63488 00:18:28.439 } 00:18:28.439 ] 00:18:28.439 }' 00:18:28.439 05:15:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:28.439 05:15:47 -- common/autotest_common.sh@10 -- # set +x 00:18:28.710 05:15:47 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:18:28.710 05:15:47 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:28.969 [2024-07-26 05:15:47.967440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:28.969 [2024-07-26 05:15:47.967545] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.969 [2024-07-26 05:15:47.967586] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:18:28.969 [2024-07-26 05:15:47.967602] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.969 [2024-07-26 05:15:47.968103] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.969 [2024-07-26 05:15:47.968134] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:28.969 [2024-07-26 05:15:47.968241] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:28.969 [2024-07-26 05:15:47.968270] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:28.969 pt2 00:18:28.969 05:15:47 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:29.227 [2024-07-26 05:15:48.215530] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:29.227 05:15:48 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:18:29.227 05:15:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:29.227 05:15:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:29.227 05:15:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:29.227 05:15:48 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:29.227 05:15:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:29.227 05:15:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:29.227 05:15:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:29.227 05:15:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:29.227 05:15:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:29.227 05:15:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.227 05:15:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.486 05:15:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:29.486 "name": "raid_bdev1", 00:18:29.486 "uuid": "bf1ef98b-164a-430f-984c-28b02162f733", 00:18:29.486 "strip_size_kb": 64, 00:18:29.486 "state": "configuring", 00:18:29.486 "raid_level": "raid0", 00:18:29.486 "superblock": true, 00:18:29.486 "num_base_bdevs": 4, 00:18:29.486 "num_base_bdevs_discovered": 1, 00:18:29.486 "num_base_bdevs_operational": 4, 00:18:29.486 "base_bdevs_list": [ 00:18:29.486 { 00:18:29.486 "name": "pt1", 00:18:29.486 "uuid": "20cc0bf5-accf-5499-a166-d88a55fc7696", 00:18:29.486 "is_configured": true, 00:18:29.486 "data_offset": 2048, 00:18:29.486 "data_size": 63488 00:18:29.486 }, 00:18:29.486 { 00:18:29.486 "name": null, 00:18:29.486 "uuid": "2acc7f69-c130-5a6b-9196-0a11783dc8f3", 00:18:29.486 "is_configured": false, 00:18:29.486 "data_offset": 2048, 00:18:29.486 "data_size": 63488 00:18:29.486 }, 00:18:29.486 { 00:18:29.486 "name": null, 00:18:29.486 "uuid": "49a4ae5a-38af-5075-a039-182d5c488921", 00:18:29.486 "is_configured": false, 00:18:29.486 "data_offset": 2048, 00:18:29.486 "data_size": 63488 00:18:29.486 }, 00:18:29.486 { 00:18:29.486 "name": null, 00:18:29.486 "uuid": "5397b220-6598-598d-a19b-02e2a0721b2c", 00:18:29.486 "is_configured": false, 00:18:29.486 "data_offset": 2048, 00:18:29.486 "data_size": 63488 00:18:29.486 } 00:18:29.486 ] 00:18:29.486 }' 00:18:29.486 05:15:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:29.486 05:15:48 -- common/autotest_common.sh@10 -- # set +x 00:18:29.744 05:15:48 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:29.744 05:15:48 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:29.744 05:15:48 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:30.002 [2024-07-26 05:15:49.015754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:30.002 [2024-07-26 05:15:49.015825] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.002 [2024-07-26 05:15:49.015854] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:18:30.002 [2024-07-26 05:15:49.015869] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.002 [2024-07-26 05:15:49.016406] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.002 [2024-07-26 05:15:49.016443] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:30.002 [2024-07-26 05:15:49.016544] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:30.002 [2024-07-26 05:15:49.016581] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:30.002 pt2 00:18:30.002 05:15:49 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:30.002 05:15:49 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:30.002 05:15:49 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:30.260 [2024-07-26 05:15:49.239825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:30.260 [2024-07-26 05:15:49.239945] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.260 [2024-07-26 05:15:49.239975] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:18:30.260 [2024-07-26 05:15:49.239992] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.260 [2024-07-26 05:15:49.240491] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.260 [2024-07-26 05:15:49.240538] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:30.260 [2024-07-26 05:15:49.240637] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:30.260 [2024-07-26 05:15:49.240678] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:30.260 pt3 00:18:30.261 05:15:49 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:30.261 05:15:49 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:30.261 05:15:49 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:30.519 [2024-07-26 05:15:49.459859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:30.519 [2024-07-26 05:15:49.459968] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.519 [2024-07-26 05:15:49.459997] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:18:30.519 [2024-07-26 05:15:49.460013] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.519 [2024-07-26 05:15:49.460516] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.519 [2024-07-26 05:15:49.460592] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:30.519 [2024-07-26 05:15:49.460684] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:18:30.519 [2024-07-26 05:15:49.460735] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:30.519 [2024-07-26 05:15:49.460882] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:18:30.519 [2024-07-26 05:15:49.460912] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:30.519 [2024-07-26 05:15:49.461033] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:18:30.519 [2024-07-26 05:15:49.461445] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:18:30.519 [2024-07-26 05:15:49.461472] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:18:30.519 [2024-07-26 05:15:49.461666] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.519 pt4 00:18:30.519 05:15:49 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:30.519 05:15:49 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:18:30.519 05:15:49 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:30.519 05:15:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:30.519 05:15:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:30.519 05:15:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:30.519 05:15:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:30.519 05:15:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:30.519 05:15:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:30.519 05:15:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:30.519 05:15:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:30.519 05:15:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:30.519 05:15:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.519 05:15:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.777 05:15:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:30.777 "name": "raid_bdev1", 00:18:30.777 "uuid": "bf1ef98b-164a-430f-984c-28b02162f733", 00:18:30.777 "strip_size_kb": 64, 00:18:30.777 "state": "online", 00:18:30.777 "raid_level": "raid0", 00:18:30.777 "superblock": true, 00:18:30.777 "num_base_bdevs": 4, 00:18:30.777 "num_base_bdevs_discovered": 4, 00:18:30.777 "num_base_bdevs_operational": 4, 00:18:30.777 "base_bdevs_list": [ 00:18:30.777 { 00:18:30.777 "name": "pt1", 00:18:30.777 "uuid": "20cc0bf5-accf-5499-a166-d88a55fc7696", 00:18:30.777 "is_configured": true, 00:18:30.777 "data_offset": 2048, 00:18:30.777 "data_size": 63488 00:18:30.777 }, 00:18:30.777 { 00:18:30.777 "name": "pt2", 00:18:30.777 "uuid": "2acc7f69-c130-5a6b-9196-0a11783dc8f3", 00:18:30.777 "is_configured": true, 00:18:30.777 "data_offset": 2048, 00:18:30.777 "data_size": 63488 00:18:30.777 }, 00:18:30.777 { 00:18:30.777 "name": "pt3", 00:18:30.777 "uuid": "49a4ae5a-38af-5075-a039-182d5c488921", 00:18:30.777 "is_configured": true, 00:18:30.777 "data_offset": 2048, 00:18:30.777 "data_size": 63488 00:18:30.777 }, 00:18:30.777 { 00:18:30.777 "name": "pt4", 00:18:30.777 "uuid": "5397b220-6598-598d-a19b-02e2a0721b2c", 00:18:30.777 "is_configured": true, 00:18:30.777 "data_offset": 2048, 00:18:30.777 "data_size": 63488 00:18:30.777 } 00:18:30.777 ] 00:18:30.777 }' 00:18:30.777 05:15:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:30.777 05:15:49 -- common/autotest_common.sh@10 -- # set +x 00:18:31.035 05:15:50 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:31.035 05:15:50 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:31.294 [2024-07-26 05:15:50.300503] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:31.294 05:15:50 -- bdev/bdev_raid.sh@430 -- # '[' bf1ef98b-164a-430f-984c-28b02162f733 '!=' bf1ef98b-164a-430f-984c-28b02162f733 ']' 00:18:31.294 05:15:50 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:18:31.294 05:15:50 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:31.294 05:15:50 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:31.294 05:15:50 -- bdev/bdev_raid.sh@511 -- # killprocess 75318 00:18:31.294 05:15:50 -- common/autotest_common.sh@926 -- # '[' -z 75318 ']' 00:18:31.294 05:15:50 -- common/autotest_common.sh@930 -- # kill -0 75318 00:18:31.294 05:15:50 -- common/autotest_common.sh@931 -- # uname 00:18:31.294 05:15:50 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:31.294 05:15:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 75318 00:18:31.294 killing process with pid 75318 00:18:31.294 05:15:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:31.294 05:15:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:31.294 05:15:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 75318' 00:18:31.294 05:15:50 -- common/autotest_common.sh@945 -- # kill 75318 00:18:31.294 [2024-07-26 05:15:50.358750] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:31.294 [2024-07-26 05:15:50.358825] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:31.294 05:15:50 -- common/autotest_common.sh@950 -- # wait 75318 00:18:31.294 [2024-07-26 05:15:50.358918] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:31.294 [2024-07-26 05:15:50.358932] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:18:31.861 [2024-07-26 05:15:50.667425] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:32.797 ************************************ 00:18:32.797 END TEST raid_superblock_test 00:18:32.797 ************************************ 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:32.797 00:18:32.797 real 0m10.631s 00:18:32.797 user 0m17.652s 00:18:32.797 sys 0m1.428s 00:18:32.797 05:15:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:32.797 05:15:51 -- common/autotest_common.sh@10 -- # set +x 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:18:32.797 05:15:51 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:32.797 05:15:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:32.797 05:15:51 -- common/autotest_common.sh@10 -- # set +x 00:18:32.797 ************************************ 00:18:32.797 START TEST raid_state_function_test 00:18:32.797 ************************************ 00:18:32.797 05:15:51 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 false 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:32.797 
05:15:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@226 -- # raid_pid=75609 00:18:32.797 Process raid pid: 75609 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 75609' 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@228 -- # waitforlisten 75609 /var/tmp/spdk-raid.sock 00:18:32.797 05:15:51 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:32.797 05:15:51 -- common/autotest_common.sh@819 -- # '[' -z 75609 ']' 00:18:32.797 05:15:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:32.797 05:15:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:32.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:32.797 05:15:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:32.797 05:15:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:32.797 05:15:51 -- common/autotest_common.sh@10 -- # set +x 00:18:32.797 [2024-07-26 05:15:51.856569] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
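The state-function test starting here drives a standalone bdev_svc application rather than a full SPDK target. A condensed sketch of the startup steps recorded in this stretch of the log; it assumes autotest_common.sh has been sourced so that waitforlisten is available, and that the app is backgrounded the way the harness does it:

    app=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # start the minimal bdev application with raid debug logging on its own RPC socket
    "$app" -r "$sock" -i 0 -L bdev_raid &
    raid_pid=$!
    # block until the UNIX-domain RPC socket is accepting connections
    waitforlisten "$raid_pid" "$sock"
    # creating the raid before any BaseBdev exists leaves Existed_Raid in "configuring"
    # with num_base_bdevs_discovered == 0, as the JSON dump further down shows
    "$rpc" -s "$sock" bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid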
00:18:32.797 [2024-07-26 05:15:51.856747] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:33.056 [2024-07-26 05:15:52.025797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.315 [2024-07-26 05:15:52.204714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.315 [2024-07-26 05:15:52.375149] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:33.882 05:15:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:33.882 05:15:52 -- common/autotest_common.sh@852 -- # return 0 00:18:33.882 05:15:52 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:33.882 [2024-07-26 05:15:52.974718] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:33.882 [2024-07-26 05:15:52.974791] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:33.882 [2024-07-26 05:15:52.974820] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:33.882 [2024-07-26 05:15:52.974834] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:33.882 [2024-07-26 05:15:52.974842] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:33.882 [2024-07-26 05:15:52.974855] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:33.882 [2024-07-26 05:15:52.974862] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:33.882 [2024-07-26 05:15:52.974874] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:34.141 05:15:52 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:34.141 05:15:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:34.141 05:15:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:34.141 05:15:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:34.141 05:15:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:34.141 05:15:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:34.141 05:15:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:34.141 05:15:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:34.141 05:15:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:34.141 05:15:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:34.141 05:15:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.141 05:15:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:34.141 05:15:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:34.141 "name": "Existed_Raid", 00:18:34.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.141 "strip_size_kb": 64, 00:18:34.141 "state": "configuring", 00:18:34.141 "raid_level": "concat", 00:18:34.141 "superblock": false, 00:18:34.141 "num_base_bdevs": 4, 00:18:34.141 "num_base_bdevs_discovered": 0, 00:18:34.141 "num_base_bdevs_operational": 4, 00:18:34.141 "base_bdevs_list": [ 00:18:34.141 { 00:18:34.141 
"name": "BaseBdev1", 00:18:34.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.141 "is_configured": false, 00:18:34.141 "data_offset": 0, 00:18:34.141 "data_size": 0 00:18:34.141 }, 00:18:34.141 { 00:18:34.141 "name": "BaseBdev2", 00:18:34.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.141 "is_configured": false, 00:18:34.141 "data_offset": 0, 00:18:34.141 "data_size": 0 00:18:34.141 }, 00:18:34.141 { 00:18:34.141 "name": "BaseBdev3", 00:18:34.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.141 "is_configured": false, 00:18:34.141 "data_offset": 0, 00:18:34.141 "data_size": 0 00:18:34.141 }, 00:18:34.141 { 00:18:34.141 "name": "BaseBdev4", 00:18:34.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.141 "is_configured": false, 00:18:34.141 "data_offset": 0, 00:18:34.141 "data_size": 0 00:18:34.141 } 00:18:34.141 ] 00:18:34.141 }' 00:18:34.141 05:15:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:34.141 05:15:53 -- common/autotest_common.sh@10 -- # set +x 00:18:34.737 05:15:53 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:34.737 [2024-07-26 05:15:53.746842] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:34.737 [2024-07-26 05:15:53.746890] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:18:34.737 05:15:53 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:34.995 [2024-07-26 05:15:54.010969] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:34.995 [2024-07-26 05:15:54.011069] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:34.995 [2024-07-26 05:15:54.011083] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:34.995 [2024-07-26 05:15:54.011097] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:34.995 [2024-07-26 05:15:54.011106] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:34.995 [2024-07-26 05:15:54.011118] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:34.995 [2024-07-26 05:15:54.011126] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:34.995 [2024-07-26 05:15:54.011138] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:34.995 05:15:54 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:35.253 [2024-07-26 05:15:54.280045] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:35.253 BaseBdev1 00:18:35.253 05:15:54 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:35.253 05:15:54 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:35.253 05:15:54 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:35.253 05:15:54 -- common/autotest_common.sh@889 -- # local i 00:18:35.253 05:15:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:35.253 05:15:54 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:35.253 05:15:54 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:35.511 05:15:54 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:35.770 [ 00:18:35.770 { 00:18:35.770 "name": "BaseBdev1", 00:18:35.770 "aliases": [ 00:18:35.770 "fc120dd1-9d75-4eaf-ab64-91569510c5f7" 00:18:35.770 ], 00:18:35.770 "product_name": "Malloc disk", 00:18:35.770 "block_size": 512, 00:18:35.770 "num_blocks": 65536, 00:18:35.770 "uuid": "fc120dd1-9d75-4eaf-ab64-91569510c5f7", 00:18:35.770 "assigned_rate_limits": { 00:18:35.770 "rw_ios_per_sec": 0, 00:18:35.770 "rw_mbytes_per_sec": 0, 00:18:35.770 "r_mbytes_per_sec": 0, 00:18:35.770 "w_mbytes_per_sec": 0 00:18:35.770 }, 00:18:35.770 "claimed": true, 00:18:35.770 "claim_type": "exclusive_write", 00:18:35.770 "zoned": false, 00:18:35.770 "supported_io_types": { 00:18:35.770 "read": true, 00:18:35.770 "write": true, 00:18:35.770 "unmap": true, 00:18:35.770 "write_zeroes": true, 00:18:35.770 "flush": true, 00:18:35.770 "reset": true, 00:18:35.770 "compare": false, 00:18:35.770 "compare_and_write": false, 00:18:35.770 "abort": true, 00:18:35.770 "nvme_admin": false, 00:18:35.770 "nvme_io": false 00:18:35.770 }, 00:18:35.770 "memory_domains": [ 00:18:35.770 { 00:18:35.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.770 "dma_device_type": 2 00:18:35.770 } 00:18:35.770 ], 00:18:35.770 "driver_specific": {} 00:18:35.770 } 00:18:35.770 ] 00:18:35.770 05:15:54 -- common/autotest_common.sh@895 -- # return 0 00:18:35.770 05:15:54 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:35.770 05:15:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:35.770 05:15:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:35.770 05:15:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:35.770 05:15:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:35.770 05:15:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:35.770 05:15:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:35.770 05:15:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:35.770 05:15:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:35.770 05:15:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:35.770 05:15:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.770 05:15:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:36.028 05:15:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:36.028 "name": "Existed_Raid", 00:18:36.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.028 "strip_size_kb": 64, 00:18:36.028 "state": "configuring", 00:18:36.028 "raid_level": "concat", 00:18:36.028 "superblock": false, 00:18:36.028 "num_base_bdevs": 4, 00:18:36.028 "num_base_bdevs_discovered": 1, 00:18:36.028 "num_base_bdevs_operational": 4, 00:18:36.028 "base_bdevs_list": [ 00:18:36.028 { 00:18:36.028 "name": "BaseBdev1", 00:18:36.028 "uuid": "fc120dd1-9d75-4eaf-ab64-91569510c5f7", 00:18:36.028 "is_configured": true, 00:18:36.028 "data_offset": 0, 00:18:36.028 "data_size": 65536 00:18:36.028 }, 00:18:36.028 { 00:18:36.028 "name": "BaseBdev2", 00:18:36.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.028 "is_configured": false, 00:18:36.028 "data_offset": 0, 00:18:36.028 "data_size": 0 00:18:36.028 }, 
00:18:36.028 { 00:18:36.028 "name": "BaseBdev3", 00:18:36.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.028 "is_configured": false, 00:18:36.028 "data_offset": 0, 00:18:36.028 "data_size": 0 00:18:36.028 }, 00:18:36.028 { 00:18:36.028 "name": "BaseBdev4", 00:18:36.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.028 "is_configured": false, 00:18:36.029 "data_offset": 0, 00:18:36.029 "data_size": 0 00:18:36.029 } 00:18:36.029 ] 00:18:36.029 }' 00:18:36.029 05:15:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:36.029 05:15:54 -- common/autotest_common.sh@10 -- # set +x 00:18:36.287 05:15:55 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:36.545 [2024-07-26 05:15:55.472493] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:36.545 [2024-07-26 05:15:55.472569] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:18:36.545 05:15:55 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:36.545 05:15:55 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:36.803 [2024-07-26 05:15:55.680608] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:36.803 [2024-07-26 05:15:55.682785] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:36.803 [2024-07-26 05:15:55.682868] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:36.803 [2024-07-26 05:15:55.682897] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:36.803 [2024-07-26 05:15:55.682926] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:36.803 [2024-07-26 05:15:55.682940] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:36.803 [2024-07-26 05:15:55.682954] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:36.803 05:15:55 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:36.803 05:15:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:36.803 05:15:55 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:36.803 05:15:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:36.803 05:15:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:36.803 05:15:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:36.803 05:15:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:36.803 05:15:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:36.803 05:15:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:36.803 05:15:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:36.803 05:15:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:36.803 05:15:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:36.803 05:15:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.803 05:15:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:37.157 05:15:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:37.157 "name": "Existed_Raid", 00:18:37.157 
"uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.157 "strip_size_kb": 64, 00:18:37.157 "state": "configuring", 00:18:37.157 "raid_level": "concat", 00:18:37.157 "superblock": false, 00:18:37.157 "num_base_bdevs": 4, 00:18:37.157 "num_base_bdevs_discovered": 1, 00:18:37.157 "num_base_bdevs_operational": 4, 00:18:37.157 "base_bdevs_list": [ 00:18:37.157 { 00:18:37.157 "name": "BaseBdev1", 00:18:37.157 "uuid": "fc120dd1-9d75-4eaf-ab64-91569510c5f7", 00:18:37.157 "is_configured": true, 00:18:37.157 "data_offset": 0, 00:18:37.157 "data_size": 65536 00:18:37.157 }, 00:18:37.157 { 00:18:37.157 "name": "BaseBdev2", 00:18:37.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.157 "is_configured": false, 00:18:37.157 "data_offset": 0, 00:18:37.157 "data_size": 0 00:18:37.157 }, 00:18:37.157 { 00:18:37.157 "name": "BaseBdev3", 00:18:37.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.157 "is_configured": false, 00:18:37.157 "data_offset": 0, 00:18:37.157 "data_size": 0 00:18:37.157 }, 00:18:37.157 { 00:18:37.157 "name": "BaseBdev4", 00:18:37.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.157 "is_configured": false, 00:18:37.157 "data_offset": 0, 00:18:37.157 "data_size": 0 00:18:37.157 } 00:18:37.157 ] 00:18:37.157 }' 00:18:37.157 05:15:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:37.157 05:15:55 -- common/autotest_common.sh@10 -- # set +x 00:18:37.157 05:15:56 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:37.448 [2024-07-26 05:15:56.523607] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:37.448 BaseBdev2 00:18:37.448 05:15:56 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:37.448 05:15:56 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:37.448 05:15:56 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:37.448 05:15:56 -- common/autotest_common.sh@889 -- # local i 00:18:37.448 05:15:56 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:37.448 05:15:56 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:37.448 05:15:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:37.719 05:15:56 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:37.978 [ 00:18:37.978 { 00:18:37.978 "name": "BaseBdev2", 00:18:37.978 "aliases": [ 00:18:37.978 "df0ab621-ede4-4f4f-9a79-273def1076f6" 00:18:37.978 ], 00:18:37.978 "product_name": "Malloc disk", 00:18:37.978 "block_size": 512, 00:18:37.978 "num_blocks": 65536, 00:18:37.978 "uuid": "df0ab621-ede4-4f4f-9a79-273def1076f6", 00:18:37.978 "assigned_rate_limits": { 00:18:37.978 "rw_ios_per_sec": 0, 00:18:37.978 "rw_mbytes_per_sec": 0, 00:18:37.978 "r_mbytes_per_sec": 0, 00:18:37.978 "w_mbytes_per_sec": 0 00:18:37.978 }, 00:18:37.978 "claimed": true, 00:18:37.978 "claim_type": "exclusive_write", 00:18:37.978 "zoned": false, 00:18:37.978 "supported_io_types": { 00:18:37.978 "read": true, 00:18:37.978 "write": true, 00:18:37.978 "unmap": true, 00:18:37.978 "write_zeroes": true, 00:18:37.978 "flush": true, 00:18:37.978 "reset": true, 00:18:37.978 "compare": false, 00:18:37.978 "compare_and_write": false, 00:18:37.978 "abort": true, 00:18:37.978 "nvme_admin": false, 00:18:37.978 "nvme_io": false 00:18:37.978 }, 00:18:37.978 "memory_domains": [ 
00:18:37.978 { 00:18:37.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.978 "dma_device_type": 2 00:18:37.978 } 00:18:37.978 ], 00:18:37.978 "driver_specific": {} 00:18:37.978 } 00:18:37.978 ] 00:18:37.978 05:15:56 -- common/autotest_common.sh@895 -- # return 0 00:18:37.978 05:15:56 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:37.978 05:15:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:37.978 05:15:56 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:37.978 05:15:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:37.978 05:15:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:37.978 05:15:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:37.978 05:15:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:37.978 05:15:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:37.978 05:15:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:37.978 05:15:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:37.978 05:15:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:37.978 05:15:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:37.978 05:15:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.978 05:15:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:38.237 05:15:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:38.237 "name": "Existed_Raid", 00:18:38.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.237 "strip_size_kb": 64, 00:18:38.237 "state": "configuring", 00:18:38.237 "raid_level": "concat", 00:18:38.237 "superblock": false, 00:18:38.237 "num_base_bdevs": 4, 00:18:38.237 "num_base_bdevs_discovered": 2, 00:18:38.237 "num_base_bdevs_operational": 4, 00:18:38.237 "base_bdevs_list": [ 00:18:38.237 { 00:18:38.237 "name": "BaseBdev1", 00:18:38.237 "uuid": "fc120dd1-9d75-4eaf-ab64-91569510c5f7", 00:18:38.237 "is_configured": true, 00:18:38.237 "data_offset": 0, 00:18:38.237 "data_size": 65536 00:18:38.237 }, 00:18:38.237 { 00:18:38.237 "name": "BaseBdev2", 00:18:38.237 "uuid": "df0ab621-ede4-4f4f-9a79-273def1076f6", 00:18:38.237 "is_configured": true, 00:18:38.237 "data_offset": 0, 00:18:38.237 "data_size": 65536 00:18:38.237 }, 00:18:38.237 { 00:18:38.237 "name": "BaseBdev3", 00:18:38.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.237 "is_configured": false, 00:18:38.237 "data_offset": 0, 00:18:38.237 "data_size": 0 00:18:38.237 }, 00:18:38.237 { 00:18:38.237 "name": "BaseBdev4", 00:18:38.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.237 "is_configured": false, 00:18:38.237 "data_offset": 0, 00:18:38.237 "data_size": 0 00:18:38.237 } 00:18:38.237 ] 00:18:38.237 }' 00:18:38.237 05:15:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:38.237 05:15:57 -- common/autotest_common.sh@10 -- # set +x 00:18:38.495 05:15:57 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:38.754 [2024-07-26 05:15:57.814639] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:38.754 BaseBdev3 00:18:38.754 05:15:57 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:38.754 05:15:57 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:38.754 05:15:57 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:38.754 
05:15:57 -- common/autotest_common.sh@889 -- # local i 00:18:38.754 05:15:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:38.754 05:15:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:38.754 05:15:57 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:39.012 05:15:58 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:39.270 [ 00:18:39.270 { 00:18:39.270 "name": "BaseBdev3", 00:18:39.270 "aliases": [ 00:18:39.270 "093b008c-3b60-48c9-b303-aa676a0777a5" 00:18:39.270 ], 00:18:39.270 "product_name": "Malloc disk", 00:18:39.270 "block_size": 512, 00:18:39.270 "num_blocks": 65536, 00:18:39.270 "uuid": "093b008c-3b60-48c9-b303-aa676a0777a5", 00:18:39.270 "assigned_rate_limits": { 00:18:39.270 "rw_ios_per_sec": 0, 00:18:39.270 "rw_mbytes_per_sec": 0, 00:18:39.270 "r_mbytes_per_sec": 0, 00:18:39.270 "w_mbytes_per_sec": 0 00:18:39.270 }, 00:18:39.270 "claimed": true, 00:18:39.270 "claim_type": "exclusive_write", 00:18:39.270 "zoned": false, 00:18:39.270 "supported_io_types": { 00:18:39.270 "read": true, 00:18:39.270 "write": true, 00:18:39.270 "unmap": true, 00:18:39.270 "write_zeroes": true, 00:18:39.270 "flush": true, 00:18:39.270 "reset": true, 00:18:39.270 "compare": false, 00:18:39.270 "compare_and_write": false, 00:18:39.270 "abort": true, 00:18:39.270 "nvme_admin": false, 00:18:39.270 "nvme_io": false 00:18:39.270 }, 00:18:39.270 "memory_domains": [ 00:18:39.270 { 00:18:39.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.270 "dma_device_type": 2 00:18:39.270 } 00:18:39.270 ], 00:18:39.270 "driver_specific": {} 00:18:39.270 } 00:18:39.270 ] 00:18:39.270 05:15:58 -- common/autotest_common.sh@895 -- # return 0 00:18:39.270 05:15:58 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:39.270 05:15:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:39.270 05:15:58 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:39.270 05:15:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:39.270 05:15:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:39.270 05:15:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:39.270 05:15:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:39.270 05:15:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:39.270 05:15:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:39.270 05:15:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:39.270 05:15:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:39.270 05:15:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:39.270 05:15:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:39.270 05:15:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:39.528 05:15:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:39.528 "name": "Existed_Raid", 00:18:39.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.528 "strip_size_kb": 64, 00:18:39.528 "state": "configuring", 00:18:39.528 "raid_level": "concat", 00:18:39.528 "superblock": false, 00:18:39.528 "num_base_bdevs": 4, 00:18:39.528 "num_base_bdevs_discovered": 3, 00:18:39.528 "num_base_bdevs_operational": 4, 00:18:39.528 "base_bdevs_list": [ 00:18:39.528 { 00:18:39.528 "name": 
"BaseBdev1", 00:18:39.528 "uuid": "fc120dd1-9d75-4eaf-ab64-91569510c5f7", 00:18:39.528 "is_configured": true, 00:18:39.528 "data_offset": 0, 00:18:39.528 "data_size": 65536 00:18:39.528 }, 00:18:39.528 { 00:18:39.528 "name": "BaseBdev2", 00:18:39.528 "uuid": "df0ab621-ede4-4f4f-9a79-273def1076f6", 00:18:39.528 "is_configured": true, 00:18:39.528 "data_offset": 0, 00:18:39.528 "data_size": 65536 00:18:39.528 }, 00:18:39.528 { 00:18:39.528 "name": "BaseBdev3", 00:18:39.528 "uuid": "093b008c-3b60-48c9-b303-aa676a0777a5", 00:18:39.528 "is_configured": true, 00:18:39.528 "data_offset": 0, 00:18:39.528 "data_size": 65536 00:18:39.528 }, 00:18:39.528 { 00:18:39.528 "name": "BaseBdev4", 00:18:39.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.528 "is_configured": false, 00:18:39.528 "data_offset": 0, 00:18:39.528 "data_size": 0 00:18:39.528 } 00:18:39.528 ] 00:18:39.528 }' 00:18:39.528 05:15:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:39.528 05:15:58 -- common/autotest_common.sh@10 -- # set +x 00:18:39.786 05:15:58 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:40.045 [2024-07-26 05:15:59.077885] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:40.045 [2024-07-26 05:15:59.077985] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:18:40.045 [2024-07-26 05:15:59.078007] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:18:40.045 [2024-07-26 05:15:59.078182] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:18:40.045 [2024-07-26 05:15:59.078586] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:18:40.045 [2024-07-26 05:15:59.078618] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:18:40.045 [2024-07-26 05:15:59.078881] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.045 BaseBdev4 00:18:40.045 05:15:59 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:40.045 05:15:59 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:18:40.045 05:15:59 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:40.045 05:15:59 -- common/autotest_common.sh@889 -- # local i 00:18:40.045 05:15:59 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:40.045 05:15:59 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:40.045 05:15:59 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:40.303 05:15:59 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:40.562 [ 00:18:40.562 { 00:18:40.562 "name": "BaseBdev4", 00:18:40.562 "aliases": [ 00:18:40.562 "2bbd7174-9de5-4c64-b3ed-b8de020586ab" 00:18:40.562 ], 00:18:40.562 "product_name": "Malloc disk", 00:18:40.562 "block_size": 512, 00:18:40.562 "num_blocks": 65536, 00:18:40.562 "uuid": "2bbd7174-9de5-4c64-b3ed-b8de020586ab", 00:18:40.562 "assigned_rate_limits": { 00:18:40.562 "rw_ios_per_sec": 0, 00:18:40.562 "rw_mbytes_per_sec": 0, 00:18:40.562 "r_mbytes_per_sec": 0, 00:18:40.562 "w_mbytes_per_sec": 0 00:18:40.562 }, 00:18:40.562 "claimed": true, 00:18:40.562 "claim_type": "exclusive_write", 00:18:40.562 "zoned": false, 00:18:40.562 
"supported_io_types": { 00:18:40.562 "read": true, 00:18:40.562 "write": true, 00:18:40.562 "unmap": true, 00:18:40.562 "write_zeroes": true, 00:18:40.562 "flush": true, 00:18:40.562 "reset": true, 00:18:40.562 "compare": false, 00:18:40.562 "compare_and_write": false, 00:18:40.562 "abort": true, 00:18:40.562 "nvme_admin": false, 00:18:40.562 "nvme_io": false 00:18:40.562 }, 00:18:40.562 "memory_domains": [ 00:18:40.562 { 00:18:40.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.562 "dma_device_type": 2 00:18:40.562 } 00:18:40.562 ], 00:18:40.562 "driver_specific": {} 00:18:40.562 } 00:18:40.562 ] 00:18:40.562 05:15:59 -- common/autotest_common.sh@895 -- # return 0 00:18:40.562 05:15:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:40.562 05:15:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:40.562 05:15:59 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:18:40.562 05:15:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:40.562 05:15:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:40.562 05:15:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:40.562 05:15:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:40.562 05:15:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:40.562 05:15:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:40.562 05:15:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:40.562 05:15:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:40.562 05:15:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:40.562 05:15:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.562 05:15:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:40.820 05:15:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:40.820 "name": "Existed_Raid", 00:18:40.820 "uuid": "5bbe9c3b-3560-44f0-9f4b-e4dac0ac0206", 00:18:40.820 "strip_size_kb": 64, 00:18:40.820 "state": "online", 00:18:40.820 "raid_level": "concat", 00:18:40.820 "superblock": false, 00:18:40.821 "num_base_bdevs": 4, 00:18:40.821 "num_base_bdevs_discovered": 4, 00:18:40.821 "num_base_bdevs_operational": 4, 00:18:40.821 "base_bdevs_list": [ 00:18:40.821 { 00:18:40.821 "name": "BaseBdev1", 00:18:40.821 "uuid": "fc120dd1-9d75-4eaf-ab64-91569510c5f7", 00:18:40.821 "is_configured": true, 00:18:40.821 "data_offset": 0, 00:18:40.821 "data_size": 65536 00:18:40.821 }, 00:18:40.821 { 00:18:40.821 "name": "BaseBdev2", 00:18:40.821 "uuid": "df0ab621-ede4-4f4f-9a79-273def1076f6", 00:18:40.821 "is_configured": true, 00:18:40.821 "data_offset": 0, 00:18:40.821 "data_size": 65536 00:18:40.821 }, 00:18:40.821 { 00:18:40.821 "name": "BaseBdev3", 00:18:40.821 "uuid": "093b008c-3b60-48c9-b303-aa676a0777a5", 00:18:40.821 "is_configured": true, 00:18:40.821 "data_offset": 0, 00:18:40.821 "data_size": 65536 00:18:40.821 }, 00:18:40.821 { 00:18:40.821 "name": "BaseBdev4", 00:18:40.821 "uuid": "2bbd7174-9de5-4c64-b3ed-b8de020586ab", 00:18:40.821 "is_configured": true, 00:18:40.821 "data_offset": 0, 00:18:40.821 "data_size": 65536 00:18:40.821 } 00:18:40.821 ] 00:18:40.821 }' 00:18:40.821 05:15:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:40.821 05:15:59 -- common/autotest_common.sh@10 -- # set +x 00:18:41.079 05:16:00 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:18:41.337 [2024-07-26 05:16:00.346405] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:41.337 [2024-07-26 05:16:00.346443] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:41.337 [2024-07-26 05:16:00.346504] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:41.337 05:16:00 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:41.337 05:16:00 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:18:41.337 05:16:00 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:41.337 05:16:00 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:41.337 05:16:00 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:41.337 05:16:00 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:18:41.337 05:16:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:41.337 05:16:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:41.337 05:16:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:41.337 05:16:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:41.337 05:16:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:41.337 05:16:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:41.337 05:16:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:41.337 05:16:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:41.337 05:16:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:41.337 05:16:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.595 05:16:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.595 05:16:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:41.596 "name": "Existed_Raid", 00:18:41.596 "uuid": "5bbe9c3b-3560-44f0-9f4b-e4dac0ac0206", 00:18:41.596 "strip_size_kb": 64, 00:18:41.596 "state": "offline", 00:18:41.596 "raid_level": "concat", 00:18:41.596 "superblock": false, 00:18:41.596 "num_base_bdevs": 4, 00:18:41.596 "num_base_bdevs_discovered": 3, 00:18:41.596 "num_base_bdevs_operational": 3, 00:18:41.596 "base_bdevs_list": [ 00:18:41.596 { 00:18:41.596 "name": null, 00:18:41.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.596 "is_configured": false, 00:18:41.596 "data_offset": 0, 00:18:41.596 "data_size": 65536 00:18:41.596 }, 00:18:41.596 { 00:18:41.596 "name": "BaseBdev2", 00:18:41.596 "uuid": "df0ab621-ede4-4f4f-9a79-273def1076f6", 00:18:41.596 "is_configured": true, 00:18:41.596 "data_offset": 0, 00:18:41.596 "data_size": 65536 00:18:41.596 }, 00:18:41.596 { 00:18:41.596 "name": "BaseBdev3", 00:18:41.596 "uuid": "093b008c-3b60-48c9-b303-aa676a0777a5", 00:18:41.596 "is_configured": true, 00:18:41.596 "data_offset": 0, 00:18:41.596 "data_size": 65536 00:18:41.596 }, 00:18:41.596 { 00:18:41.596 "name": "BaseBdev4", 00:18:41.596 "uuid": "2bbd7174-9de5-4c64-b3ed-b8de020586ab", 00:18:41.596 "is_configured": true, 00:18:41.596 "data_offset": 0, 00:18:41.596 "data_size": 65536 00:18:41.596 } 00:18:41.596 ] 00:18:41.596 }' 00:18:41.596 05:16:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:41.596 05:16:00 -- common/autotest_common.sh@10 -- # set +x 00:18:41.854 05:16:00 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:41.854 05:16:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:41.854 05:16:00 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:18:41.854 05:16:00 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:42.111 05:16:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:42.111 05:16:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:42.111 05:16:01 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:42.369 [2024-07-26 05:16:01.399311] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:42.627 05:16:01 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:42.627 05:16:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:42.627 05:16:01 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.628 05:16:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:42.886 05:16:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:42.886 05:16:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:42.886 05:16:01 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:42.886 [2024-07-26 05:16:01.945963] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:43.144 05:16:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:43.144 05:16:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:43.144 05:16:02 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.144 05:16:02 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:43.403 05:16:02 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:43.403 05:16:02 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:43.403 05:16:02 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:43.661 [2024-07-26 05:16:02.553382] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:43.661 [2024-07-26 05:16:02.553444] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:18:43.661 05:16:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:43.661 05:16:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:43.661 05:16:02 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.661 05:16:02 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:43.920 05:16:02 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:43.920 05:16:02 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:43.920 05:16:02 -- bdev/bdev_raid.sh@287 -- # killprocess 75609 00:18:43.920 05:16:02 -- common/autotest_common.sh@926 -- # '[' -z 75609 ']' 00:18:43.920 05:16:02 -- common/autotest_common.sh@930 -- # kill -0 75609 00:18:43.920 05:16:02 -- common/autotest_common.sh@931 -- # uname 00:18:43.920 05:16:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:43.920 05:16:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 75609 00:18:43.920 killing process with pid 75609 00:18:43.920 05:16:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:43.920 05:16:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:43.920 05:16:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 75609' 00:18:43.920 05:16:02 -- common/autotest_common.sh@945 -- # 
kill 75609 00:18:43.920 [2024-07-26 05:16:02.905391] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:43.920 05:16:02 -- common/autotest_common.sh@950 -- # wait 75609 00:18:43.920 [2024-07-26 05:16:02.905522] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:45.295 05:16:04 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:45.295 00:18:45.295 real 0m12.241s 00:18:45.295 user 0m20.540s 00:18:45.295 sys 0m1.790s 00:18:45.295 05:16:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:45.295 ************************************ 00:18:45.295 END TEST raid_state_function_test 00:18:45.295 ************************************ 00:18:45.295 05:16:04 -- common/autotest_common.sh@10 -- # set +x 00:18:45.295 05:16:04 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:18:45.295 05:16:04 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:45.295 05:16:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:45.295 05:16:04 -- common/autotest_common.sh@10 -- # set +x 00:18:45.295 ************************************ 00:18:45.295 START TEST raid_state_function_test_sb 00:18:45.295 ************************************ 00:18:45.295 05:16:04 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 true 00:18:45.295 05:16:04 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:18:45.295 05:16:04 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:45.295 05:16:04 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:45.295 05:16:04 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:45.295 05:16:04 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:45.295 05:16:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:45.295 05:16:04 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:18:45.295 05:16:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:45.295 05:16:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:45.295 05:16:04 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:18:45.295 05:16:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:45.295 05:16:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:45.295 05:16:04 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:18:45.295 05:16:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:45.295 05:16:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:45.295 05:16:04 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:18:45.295 05:16:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:45.295 05:16:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:45.295 05:16:04 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:45.295 05:16:04 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:45.295 05:16:04 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:45.295 05:16:04 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:45.295 05:16:04 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:45.295 05:16:04 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:45.296 05:16:04 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:18:45.296 05:16:04 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:45.296 05:16:04 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:45.296 05:16:04 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:45.296 05:16:04 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:45.296 Process raid pid: 76003 00:18:45.296 05:16:04 -- bdev/bdev_raid.sh@226 
-- # raid_pid=76003 00:18:45.296 05:16:04 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 76003' 00:18:45.296 05:16:04 -- bdev/bdev_raid.sh@228 -- # waitforlisten 76003 /var/tmp/spdk-raid.sock 00:18:45.296 05:16:04 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:45.296 05:16:04 -- common/autotest_common.sh@819 -- # '[' -z 76003 ']' 00:18:45.296 05:16:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:45.296 05:16:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:45.296 05:16:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:45.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:45.296 05:16:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:45.296 05:16:04 -- common/autotest_common.sh@10 -- # set +x 00:18:45.296 [2024-07-26 05:16:04.153278] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:45.296 [2024-07-26 05:16:04.153586] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.296 [2024-07-26 05:16:04.331037] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.554 [2024-07-26 05:16:04.508625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.554 [2024-07-26 05:16:04.658773] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:46.121 05:16:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:46.122 05:16:05 -- common/autotest_common.sh@852 -- # return 0 00:18:46.122 05:16:05 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:46.380 [2024-07-26 05:16:05.378518] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:46.380 [2024-07-26 05:16:05.378631] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:46.380 [2024-07-26 05:16:05.378647] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:46.380 [2024-07-26 05:16:05.378663] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:46.380 [2024-07-26 05:16:05.378672] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:46.380 [2024-07-26 05:16:05.378685] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:46.380 [2024-07-26 05:16:05.378694] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:46.380 [2024-07-26 05:16:05.378706] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:46.380 05:16:05 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:46.380 05:16:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:46.380 05:16:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:46.380 05:16:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:46.380 05:16:05 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:18:46.380 05:16:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:46.380 05:16:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:46.380 05:16:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:46.380 05:16:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:46.380 05:16:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:46.380 05:16:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.380 05:16:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.639 05:16:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:46.639 "name": "Existed_Raid", 00:18:46.639 "uuid": "f4de07be-e3f0-4111-9b64-273ded1e6fde", 00:18:46.639 "strip_size_kb": 64, 00:18:46.639 "state": "configuring", 00:18:46.639 "raid_level": "concat", 00:18:46.639 "superblock": true, 00:18:46.639 "num_base_bdevs": 4, 00:18:46.639 "num_base_bdevs_discovered": 0, 00:18:46.639 "num_base_bdevs_operational": 4, 00:18:46.639 "base_bdevs_list": [ 00:18:46.639 { 00:18:46.639 "name": "BaseBdev1", 00:18:46.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.639 "is_configured": false, 00:18:46.639 "data_offset": 0, 00:18:46.639 "data_size": 0 00:18:46.639 }, 00:18:46.639 { 00:18:46.639 "name": "BaseBdev2", 00:18:46.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.639 "is_configured": false, 00:18:46.639 "data_offset": 0, 00:18:46.639 "data_size": 0 00:18:46.639 }, 00:18:46.639 { 00:18:46.639 "name": "BaseBdev3", 00:18:46.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.639 "is_configured": false, 00:18:46.639 "data_offset": 0, 00:18:46.639 "data_size": 0 00:18:46.639 }, 00:18:46.639 { 00:18:46.639 "name": "BaseBdev4", 00:18:46.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.639 "is_configured": false, 00:18:46.639 "data_offset": 0, 00:18:46.639 "data_size": 0 00:18:46.639 } 00:18:46.639 ] 00:18:46.639 }' 00:18:46.639 05:16:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:46.639 05:16:05 -- common/autotest_common.sh@10 -- # set +x 00:18:46.897 05:16:05 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:47.156 [2024-07-26 05:16:06.194557] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:47.156 [2024-07-26 05:16:06.194606] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:18:47.156 05:16:06 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:47.414 [2024-07-26 05:16:06.422782] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:47.414 [2024-07-26 05:16:06.422844] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:47.414 [2024-07-26 05:16:06.422860] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:47.414 [2024-07-26 05:16:06.422876] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:47.414 [2024-07-26 05:16:06.422885] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:47.414 [2024-07-26 05:16:06.422898] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:47.414 [2024-07-26 05:16:06.422923] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:47.414 [2024-07-26 05:16:06.422936] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:47.414 05:16:06 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:47.673 [2024-07-26 05:16:06.679124] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:47.673 BaseBdev1 00:18:47.673 05:16:06 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:47.673 05:16:06 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:47.673 05:16:06 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:47.673 05:16:06 -- common/autotest_common.sh@889 -- # local i 00:18:47.673 05:16:06 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:47.673 05:16:06 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:47.673 05:16:06 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:47.932 05:16:06 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:48.190 [ 00:18:48.190 { 00:18:48.191 "name": "BaseBdev1", 00:18:48.191 "aliases": [ 00:18:48.191 "94159205-d70f-4906-bd6d-efc8e4f33511" 00:18:48.191 ], 00:18:48.191 "product_name": "Malloc disk", 00:18:48.191 "block_size": 512, 00:18:48.191 "num_blocks": 65536, 00:18:48.191 "uuid": "94159205-d70f-4906-bd6d-efc8e4f33511", 00:18:48.191 "assigned_rate_limits": { 00:18:48.191 "rw_ios_per_sec": 0, 00:18:48.191 "rw_mbytes_per_sec": 0, 00:18:48.191 "r_mbytes_per_sec": 0, 00:18:48.191 "w_mbytes_per_sec": 0 00:18:48.191 }, 00:18:48.191 "claimed": true, 00:18:48.191 "claim_type": "exclusive_write", 00:18:48.191 "zoned": false, 00:18:48.191 "supported_io_types": { 00:18:48.191 "read": true, 00:18:48.191 "write": true, 00:18:48.191 "unmap": true, 00:18:48.191 "write_zeroes": true, 00:18:48.191 "flush": true, 00:18:48.191 "reset": true, 00:18:48.191 "compare": false, 00:18:48.191 "compare_and_write": false, 00:18:48.191 "abort": true, 00:18:48.191 "nvme_admin": false, 00:18:48.191 "nvme_io": false 00:18:48.191 }, 00:18:48.191 "memory_domains": [ 00:18:48.191 { 00:18:48.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:48.191 "dma_device_type": 2 00:18:48.191 } 00:18:48.191 ], 00:18:48.191 "driver_specific": {} 00:18:48.191 } 00:18:48.191 ] 00:18:48.191 05:16:07 -- common/autotest_common.sh@895 -- # return 0 00:18:48.191 05:16:07 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:48.191 05:16:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:48.191 05:16:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:48.191 05:16:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:48.191 05:16:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:48.191 05:16:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:48.191 05:16:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:48.191 05:16:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:48.191 05:16:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:48.191 05:16:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:48.191 05:16:07 
-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.191 05:16:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:48.450 05:16:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:48.450 "name": "Existed_Raid", 00:18:48.450 "uuid": "9091d4f6-770c-4fef-8658-b38a02d24e99", 00:18:48.450 "strip_size_kb": 64, 00:18:48.450 "state": "configuring", 00:18:48.450 "raid_level": "concat", 00:18:48.450 "superblock": true, 00:18:48.450 "num_base_bdevs": 4, 00:18:48.450 "num_base_bdevs_discovered": 1, 00:18:48.450 "num_base_bdevs_operational": 4, 00:18:48.450 "base_bdevs_list": [ 00:18:48.450 { 00:18:48.450 "name": "BaseBdev1", 00:18:48.450 "uuid": "94159205-d70f-4906-bd6d-efc8e4f33511", 00:18:48.450 "is_configured": true, 00:18:48.450 "data_offset": 2048, 00:18:48.450 "data_size": 63488 00:18:48.450 }, 00:18:48.450 { 00:18:48.450 "name": "BaseBdev2", 00:18:48.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.450 "is_configured": false, 00:18:48.450 "data_offset": 0, 00:18:48.450 "data_size": 0 00:18:48.450 }, 00:18:48.450 { 00:18:48.450 "name": "BaseBdev3", 00:18:48.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.450 "is_configured": false, 00:18:48.450 "data_offset": 0, 00:18:48.450 "data_size": 0 00:18:48.450 }, 00:18:48.450 { 00:18:48.450 "name": "BaseBdev4", 00:18:48.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.450 "is_configured": false, 00:18:48.450 "data_offset": 0, 00:18:48.450 "data_size": 0 00:18:48.450 } 00:18:48.450 ] 00:18:48.450 }' 00:18:48.450 05:16:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:48.450 05:16:07 -- common/autotest_common.sh@10 -- # set +x 00:18:48.709 05:16:07 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:48.968 [2024-07-26 05:16:07.923639] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:48.968 [2024-07-26 05:16:07.923714] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:18:48.968 05:16:07 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:48.968 05:16:07 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:49.226 05:16:08 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:49.485 BaseBdev1 00:18:49.485 05:16:08 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:49.485 05:16:08 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:49.485 05:16:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:49.485 05:16:08 -- common/autotest_common.sh@889 -- # local i 00:18:49.485 05:16:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:49.485 05:16:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:49.485 05:16:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:49.743 05:16:08 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:50.002 [ 00:18:50.002 { 00:18:50.002 "name": "BaseBdev1", 00:18:50.002 "aliases": [ 00:18:50.002 "9328f664-10e3-4e43-8b1f-d1431b2d36db" 00:18:50.002 ], 00:18:50.002 "product_name": 
"Malloc disk", 00:18:50.002 "block_size": 512, 00:18:50.002 "num_blocks": 65536, 00:18:50.002 "uuid": "9328f664-10e3-4e43-8b1f-d1431b2d36db", 00:18:50.002 "assigned_rate_limits": { 00:18:50.002 "rw_ios_per_sec": 0, 00:18:50.002 "rw_mbytes_per_sec": 0, 00:18:50.002 "r_mbytes_per_sec": 0, 00:18:50.002 "w_mbytes_per_sec": 0 00:18:50.002 }, 00:18:50.002 "claimed": false, 00:18:50.002 "zoned": false, 00:18:50.002 "supported_io_types": { 00:18:50.002 "read": true, 00:18:50.002 "write": true, 00:18:50.002 "unmap": true, 00:18:50.002 "write_zeroes": true, 00:18:50.002 "flush": true, 00:18:50.002 "reset": true, 00:18:50.002 "compare": false, 00:18:50.002 "compare_and_write": false, 00:18:50.002 "abort": true, 00:18:50.002 "nvme_admin": false, 00:18:50.002 "nvme_io": false 00:18:50.002 }, 00:18:50.002 "memory_domains": [ 00:18:50.002 { 00:18:50.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.002 "dma_device_type": 2 00:18:50.002 } 00:18:50.002 ], 00:18:50.002 "driver_specific": {} 00:18:50.002 } 00:18:50.002 ] 00:18:50.002 05:16:09 -- common/autotest_common.sh@895 -- # return 0 00:18:50.002 05:16:09 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:50.261 [2024-07-26 05:16:09.201582] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:50.262 [2024-07-26 05:16:09.203862] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:50.262 [2024-07-26 05:16:09.203929] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:50.262 [2024-07-26 05:16:09.203943] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:50.262 [2024-07-26 05:16:09.203958] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:50.262 [2024-07-26 05:16:09.203966] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:50.262 [2024-07-26 05:16:09.203980] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:50.262 05:16:09 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:50.262 05:16:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:50.262 05:16:09 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:50.262 05:16:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:50.262 05:16:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:50.262 05:16:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:50.262 05:16:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:50.262 05:16:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:50.262 05:16:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:50.262 05:16:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:50.262 05:16:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:50.262 05:16:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:50.262 05:16:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.262 05:16:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:50.520 05:16:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:50.520 "name": "Existed_Raid", 00:18:50.520 "uuid": 
"a85b7c7f-e0c7-4093-aa57-1b303d89cff8", 00:18:50.520 "strip_size_kb": 64, 00:18:50.520 "state": "configuring", 00:18:50.520 "raid_level": "concat", 00:18:50.520 "superblock": true, 00:18:50.520 "num_base_bdevs": 4, 00:18:50.520 "num_base_bdevs_discovered": 1, 00:18:50.520 "num_base_bdevs_operational": 4, 00:18:50.520 "base_bdevs_list": [ 00:18:50.520 { 00:18:50.520 "name": "BaseBdev1", 00:18:50.520 "uuid": "9328f664-10e3-4e43-8b1f-d1431b2d36db", 00:18:50.520 "is_configured": true, 00:18:50.520 "data_offset": 2048, 00:18:50.520 "data_size": 63488 00:18:50.520 }, 00:18:50.520 { 00:18:50.520 "name": "BaseBdev2", 00:18:50.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.520 "is_configured": false, 00:18:50.520 "data_offset": 0, 00:18:50.520 "data_size": 0 00:18:50.520 }, 00:18:50.520 { 00:18:50.520 "name": "BaseBdev3", 00:18:50.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.520 "is_configured": false, 00:18:50.520 "data_offset": 0, 00:18:50.520 "data_size": 0 00:18:50.520 }, 00:18:50.520 { 00:18:50.520 "name": "BaseBdev4", 00:18:50.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.520 "is_configured": false, 00:18:50.520 "data_offset": 0, 00:18:50.520 "data_size": 0 00:18:50.520 } 00:18:50.520 ] 00:18:50.520 }' 00:18:50.520 05:16:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:50.520 05:16:09 -- common/autotest_common.sh@10 -- # set +x 00:18:50.794 05:16:09 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:51.057 [2024-07-26 05:16:09.994380] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:51.057 BaseBdev2 00:18:51.057 05:16:10 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:51.057 05:16:10 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:51.057 05:16:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:51.057 05:16:10 -- common/autotest_common.sh@889 -- # local i 00:18:51.057 05:16:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:51.057 05:16:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:51.057 05:16:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:51.317 05:16:10 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:51.577 [ 00:18:51.577 { 00:18:51.577 "name": "BaseBdev2", 00:18:51.577 "aliases": [ 00:18:51.577 "e53b40bb-b95b-46bf-bd53-9f3905452627" 00:18:51.577 ], 00:18:51.577 "product_name": "Malloc disk", 00:18:51.577 "block_size": 512, 00:18:51.577 "num_blocks": 65536, 00:18:51.577 "uuid": "e53b40bb-b95b-46bf-bd53-9f3905452627", 00:18:51.577 "assigned_rate_limits": { 00:18:51.577 "rw_ios_per_sec": 0, 00:18:51.577 "rw_mbytes_per_sec": 0, 00:18:51.577 "r_mbytes_per_sec": 0, 00:18:51.577 "w_mbytes_per_sec": 0 00:18:51.577 }, 00:18:51.577 "claimed": true, 00:18:51.577 "claim_type": "exclusive_write", 00:18:51.577 "zoned": false, 00:18:51.577 "supported_io_types": { 00:18:51.577 "read": true, 00:18:51.577 "write": true, 00:18:51.577 "unmap": true, 00:18:51.577 "write_zeroes": true, 00:18:51.577 "flush": true, 00:18:51.577 "reset": true, 00:18:51.577 "compare": false, 00:18:51.577 "compare_and_write": false, 00:18:51.577 "abort": true, 00:18:51.577 "nvme_admin": false, 00:18:51.577 "nvme_io": false 00:18:51.577 }, 00:18:51.577 "memory_domains": [ 00:18:51.577 { 
00:18:51.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.577 "dma_device_type": 2 00:18:51.577 } 00:18:51.577 ], 00:18:51.577 "driver_specific": {} 00:18:51.577 } 00:18:51.577 ] 00:18:51.577 05:16:10 -- common/autotest_common.sh@895 -- # return 0 00:18:51.577 05:16:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:51.577 05:16:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:51.577 05:16:10 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:51.577 05:16:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:51.577 05:16:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:51.577 05:16:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:51.577 05:16:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:51.577 05:16:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:51.577 05:16:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:51.577 05:16:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:51.577 05:16:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:51.577 05:16:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:51.577 05:16:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:51.577 05:16:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.836 05:16:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:51.836 "name": "Existed_Raid", 00:18:51.836 "uuid": "a85b7c7f-e0c7-4093-aa57-1b303d89cff8", 00:18:51.836 "strip_size_kb": 64, 00:18:51.836 "state": "configuring", 00:18:51.836 "raid_level": "concat", 00:18:51.836 "superblock": true, 00:18:51.836 "num_base_bdevs": 4, 00:18:51.836 "num_base_bdevs_discovered": 2, 00:18:51.836 "num_base_bdevs_operational": 4, 00:18:51.836 "base_bdevs_list": [ 00:18:51.836 { 00:18:51.836 "name": "BaseBdev1", 00:18:51.836 "uuid": "9328f664-10e3-4e43-8b1f-d1431b2d36db", 00:18:51.836 "is_configured": true, 00:18:51.836 "data_offset": 2048, 00:18:51.836 "data_size": 63488 00:18:51.836 }, 00:18:51.836 { 00:18:51.836 "name": "BaseBdev2", 00:18:51.836 "uuid": "e53b40bb-b95b-46bf-bd53-9f3905452627", 00:18:51.836 "is_configured": true, 00:18:51.836 "data_offset": 2048, 00:18:51.836 "data_size": 63488 00:18:51.836 }, 00:18:51.836 { 00:18:51.836 "name": "BaseBdev3", 00:18:51.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.836 "is_configured": false, 00:18:51.836 "data_offset": 0, 00:18:51.836 "data_size": 0 00:18:51.836 }, 00:18:51.836 { 00:18:51.836 "name": "BaseBdev4", 00:18:51.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.836 "is_configured": false, 00:18:51.836 "data_offset": 0, 00:18:51.836 "data_size": 0 00:18:51.836 } 00:18:51.836 ] 00:18:51.836 }' 00:18:51.836 05:16:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:51.836 05:16:10 -- common/autotest_common.sh@10 -- # set +x 00:18:52.094 05:16:11 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:52.363 [2024-07-26 05:16:11.299377] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:52.363 BaseBdev3 00:18:52.363 05:16:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:52.363 05:16:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:52.363 05:16:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:52.363 05:16:11 -- 
common/autotest_common.sh@889 -- # local i 00:18:52.363 05:16:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:52.363 05:16:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:52.363 05:16:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:52.634 05:16:11 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:52.893 [ 00:18:52.893 { 00:18:52.893 "name": "BaseBdev3", 00:18:52.893 "aliases": [ 00:18:52.893 "e6cc3b4d-7f98-47aa-9a1e-98371d180538" 00:18:52.893 ], 00:18:52.893 "product_name": "Malloc disk", 00:18:52.893 "block_size": 512, 00:18:52.893 "num_blocks": 65536, 00:18:52.893 "uuid": "e6cc3b4d-7f98-47aa-9a1e-98371d180538", 00:18:52.893 "assigned_rate_limits": { 00:18:52.893 "rw_ios_per_sec": 0, 00:18:52.893 "rw_mbytes_per_sec": 0, 00:18:52.893 "r_mbytes_per_sec": 0, 00:18:52.893 "w_mbytes_per_sec": 0 00:18:52.893 }, 00:18:52.893 "claimed": true, 00:18:52.893 "claim_type": "exclusive_write", 00:18:52.893 "zoned": false, 00:18:52.893 "supported_io_types": { 00:18:52.893 "read": true, 00:18:52.893 "write": true, 00:18:52.893 "unmap": true, 00:18:52.893 "write_zeroes": true, 00:18:52.893 "flush": true, 00:18:52.893 "reset": true, 00:18:52.893 "compare": false, 00:18:52.893 "compare_and_write": false, 00:18:52.893 "abort": true, 00:18:52.893 "nvme_admin": false, 00:18:52.893 "nvme_io": false 00:18:52.893 }, 00:18:52.893 "memory_domains": [ 00:18:52.893 { 00:18:52.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.893 "dma_device_type": 2 00:18:52.893 } 00:18:52.893 ], 00:18:52.893 "driver_specific": {} 00:18:52.893 } 00:18:52.893 ] 00:18:52.893 05:16:11 -- common/autotest_common.sh@895 -- # return 0 00:18:52.893 05:16:11 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:52.893 05:16:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:52.893 05:16:11 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:52.893 05:16:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:52.893 05:16:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:52.893 05:16:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:52.893 05:16:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:52.893 05:16:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:52.893 05:16:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:52.893 05:16:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:52.893 05:16:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:52.893 05:16:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:52.893 05:16:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.893 05:16:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:52.893 05:16:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:52.893 "name": "Existed_Raid", 00:18:52.893 "uuid": "a85b7c7f-e0c7-4093-aa57-1b303d89cff8", 00:18:52.893 "strip_size_kb": 64, 00:18:52.893 "state": "configuring", 00:18:52.893 "raid_level": "concat", 00:18:52.893 "superblock": true, 00:18:52.893 "num_base_bdevs": 4, 00:18:52.893 "num_base_bdevs_discovered": 3, 00:18:52.893 "num_base_bdevs_operational": 4, 00:18:52.893 "base_bdevs_list": [ 00:18:52.893 { 00:18:52.893 "name": "BaseBdev1", 
00:18:52.893 "uuid": "9328f664-10e3-4e43-8b1f-d1431b2d36db", 00:18:52.893 "is_configured": true, 00:18:52.893 "data_offset": 2048, 00:18:52.893 "data_size": 63488 00:18:52.893 }, 00:18:52.893 { 00:18:52.893 "name": "BaseBdev2", 00:18:52.893 "uuid": "e53b40bb-b95b-46bf-bd53-9f3905452627", 00:18:52.893 "is_configured": true, 00:18:52.893 "data_offset": 2048, 00:18:52.893 "data_size": 63488 00:18:52.893 }, 00:18:52.893 { 00:18:52.893 "name": "BaseBdev3", 00:18:52.893 "uuid": "e6cc3b4d-7f98-47aa-9a1e-98371d180538", 00:18:52.893 "is_configured": true, 00:18:52.893 "data_offset": 2048, 00:18:52.893 "data_size": 63488 00:18:52.893 }, 00:18:52.893 { 00:18:52.893 "name": "BaseBdev4", 00:18:52.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.893 "is_configured": false, 00:18:52.893 "data_offset": 0, 00:18:52.893 "data_size": 0 00:18:52.893 } 00:18:52.893 ] 00:18:52.893 }' 00:18:52.893 05:16:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:52.893 05:16:11 -- common/autotest_common.sh@10 -- # set +x 00:18:53.460 05:16:12 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:53.460 [2024-07-26 05:16:12.550166] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:53.460 [2024-07-26 05:16:12.550603] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:18:53.460 [2024-07-26 05:16:12.550756] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:53.460 [2024-07-26 05:16:12.551009] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:18:53.460 [2024-07-26 05:16:12.551518] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:18:53.460 BaseBdev4 00:18:53.460 [2024-07-26 05:16:12.551675] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:18:53.460 [2024-07-26 05:16:12.551955] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:53.460 05:16:12 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:53.460 05:16:12 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:18:53.460 05:16:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:53.718 05:16:12 -- common/autotest_common.sh@889 -- # local i 00:18:53.718 05:16:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:53.718 05:16:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:53.718 05:16:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:53.977 05:16:12 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:54.235 [ 00:18:54.235 { 00:18:54.235 "name": "BaseBdev4", 00:18:54.235 "aliases": [ 00:18:54.235 "17daa1b4-8e51-4e14-93ba-537d2a864d50" 00:18:54.235 ], 00:18:54.235 "product_name": "Malloc disk", 00:18:54.235 "block_size": 512, 00:18:54.235 "num_blocks": 65536, 00:18:54.235 "uuid": "17daa1b4-8e51-4e14-93ba-537d2a864d50", 00:18:54.235 "assigned_rate_limits": { 00:18:54.235 "rw_ios_per_sec": 0, 00:18:54.235 "rw_mbytes_per_sec": 0, 00:18:54.235 "r_mbytes_per_sec": 0, 00:18:54.235 "w_mbytes_per_sec": 0 00:18:54.235 }, 00:18:54.235 "claimed": true, 00:18:54.235 "claim_type": "exclusive_write", 00:18:54.235 "zoned": false, 00:18:54.235 "supported_io_types": { 
00:18:54.235 "read": true, 00:18:54.235 "write": true, 00:18:54.235 "unmap": true, 00:18:54.235 "write_zeroes": true, 00:18:54.235 "flush": true, 00:18:54.235 "reset": true, 00:18:54.235 "compare": false, 00:18:54.235 "compare_and_write": false, 00:18:54.235 "abort": true, 00:18:54.235 "nvme_admin": false, 00:18:54.235 "nvme_io": false 00:18:54.235 }, 00:18:54.235 "memory_domains": [ 00:18:54.235 { 00:18:54.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.235 "dma_device_type": 2 00:18:54.235 } 00:18:54.235 ], 00:18:54.235 "driver_specific": {} 00:18:54.235 } 00:18:54.235 ] 00:18:54.235 05:16:13 -- common/autotest_common.sh@895 -- # return 0 00:18:54.235 05:16:13 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:54.235 05:16:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:54.235 05:16:13 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:18:54.235 05:16:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:54.235 05:16:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:54.235 05:16:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:54.235 05:16:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:54.235 05:16:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:54.235 05:16:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:54.235 05:16:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:54.235 05:16:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:54.235 05:16:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:54.235 05:16:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:54.235 05:16:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.492 05:16:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:54.492 "name": "Existed_Raid", 00:18:54.492 "uuid": "a85b7c7f-e0c7-4093-aa57-1b303d89cff8", 00:18:54.492 "strip_size_kb": 64, 00:18:54.492 "state": "online", 00:18:54.492 "raid_level": "concat", 00:18:54.492 "superblock": true, 00:18:54.492 "num_base_bdevs": 4, 00:18:54.492 "num_base_bdevs_discovered": 4, 00:18:54.492 "num_base_bdevs_operational": 4, 00:18:54.492 "base_bdevs_list": [ 00:18:54.492 { 00:18:54.492 "name": "BaseBdev1", 00:18:54.492 "uuid": "9328f664-10e3-4e43-8b1f-d1431b2d36db", 00:18:54.492 "is_configured": true, 00:18:54.492 "data_offset": 2048, 00:18:54.492 "data_size": 63488 00:18:54.492 }, 00:18:54.492 { 00:18:54.492 "name": "BaseBdev2", 00:18:54.493 "uuid": "e53b40bb-b95b-46bf-bd53-9f3905452627", 00:18:54.493 "is_configured": true, 00:18:54.493 "data_offset": 2048, 00:18:54.493 "data_size": 63488 00:18:54.493 }, 00:18:54.493 { 00:18:54.493 "name": "BaseBdev3", 00:18:54.493 "uuid": "e6cc3b4d-7f98-47aa-9a1e-98371d180538", 00:18:54.493 "is_configured": true, 00:18:54.493 "data_offset": 2048, 00:18:54.493 "data_size": 63488 00:18:54.493 }, 00:18:54.493 { 00:18:54.493 "name": "BaseBdev4", 00:18:54.493 "uuid": "17daa1b4-8e51-4e14-93ba-537d2a864d50", 00:18:54.493 "is_configured": true, 00:18:54.493 "data_offset": 2048, 00:18:54.493 "data_size": 63488 00:18:54.493 } 00:18:54.493 ] 00:18:54.493 }' 00:18:54.493 05:16:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:54.493 05:16:13 -- common/autotest_common.sh@10 -- # set +x 00:18:54.750 05:16:13 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:55.007 [2024-07-26 
05:16:13.910756] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:55.007 [2024-07-26 05:16:13.910976] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:55.007 [2024-07-26 05:16:13.911219] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:55.007 05:16:14 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:55.007 05:16:14 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:18:55.007 05:16:14 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:55.007 05:16:14 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:55.007 05:16:14 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:55.007 05:16:14 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:18:55.007 05:16:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:55.007 05:16:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:55.007 05:16:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:55.007 05:16:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:55.007 05:16:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:55.007 05:16:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:55.007 05:16:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:55.007 05:16:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:55.007 05:16:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:55.007 05:16:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.007 05:16:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.265 05:16:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:55.265 "name": "Existed_Raid", 00:18:55.265 "uuid": "a85b7c7f-e0c7-4093-aa57-1b303d89cff8", 00:18:55.265 "strip_size_kb": 64, 00:18:55.265 "state": "offline", 00:18:55.265 "raid_level": "concat", 00:18:55.265 "superblock": true, 00:18:55.265 "num_base_bdevs": 4, 00:18:55.265 "num_base_bdevs_discovered": 3, 00:18:55.265 "num_base_bdevs_operational": 3, 00:18:55.265 "base_bdevs_list": [ 00:18:55.265 { 00:18:55.265 "name": null, 00:18:55.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.265 "is_configured": false, 00:18:55.265 "data_offset": 2048, 00:18:55.265 "data_size": 63488 00:18:55.265 }, 00:18:55.265 { 00:18:55.265 "name": "BaseBdev2", 00:18:55.265 "uuid": "e53b40bb-b95b-46bf-bd53-9f3905452627", 00:18:55.265 "is_configured": true, 00:18:55.265 "data_offset": 2048, 00:18:55.265 "data_size": 63488 00:18:55.265 }, 00:18:55.265 { 00:18:55.265 "name": "BaseBdev3", 00:18:55.265 "uuid": "e6cc3b4d-7f98-47aa-9a1e-98371d180538", 00:18:55.265 "is_configured": true, 00:18:55.265 "data_offset": 2048, 00:18:55.265 "data_size": 63488 00:18:55.265 }, 00:18:55.265 { 00:18:55.265 "name": "BaseBdev4", 00:18:55.265 "uuid": "17daa1b4-8e51-4e14-93ba-537d2a864d50", 00:18:55.265 "is_configured": true, 00:18:55.265 "data_offset": 2048, 00:18:55.265 "data_size": 63488 00:18:55.265 } 00:18:55.265 ] 00:18:55.265 }' 00:18:55.265 05:16:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:55.265 05:16:14 -- common/autotest_common.sh@10 -- # set +x 00:18:55.523 05:16:14 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:55.523 05:16:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:55.523 05:16:14 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:18:55.523 05:16:14 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:55.781 05:16:14 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:55.781 05:16:14 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:55.781 05:16:14 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:56.040 [2024-07-26 05:16:15.044455] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:56.040 05:16:15 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:56.040 05:16:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:56.040 05:16:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:56.040 05:16:15 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.607 05:16:15 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:56.607 05:16:15 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:56.607 05:16:15 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:56.607 [2024-07-26 05:16:15.609799] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:56.607 05:16:15 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:56.607 05:16:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:56.607 05:16:15 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.607 05:16:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:56.865 05:16:15 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:56.865 05:16:15 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:56.865 05:16:15 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:57.123 [2024-07-26 05:16:16.129417] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:57.123 [2024-07-26 05:16:16.129499] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:18:57.123 05:16:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:57.123 05:16:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:57.123 05:16:16 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:57.382 05:16:16 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.640 05:16:16 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:57.640 05:16:16 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:57.640 05:16:16 -- bdev/bdev_raid.sh@287 -- # killprocess 76003 00:18:57.640 05:16:16 -- common/autotest_common.sh@926 -- # '[' -z 76003 ']' 00:18:57.640 05:16:16 -- common/autotest_common.sh@930 -- # kill -0 76003 00:18:57.640 05:16:16 -- common/autotest_common.sh@931 -- # uname 00:18:57.640 05:16:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:57.640 05:16:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76003 00:18:57.640 05:16:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:57.640 05:16:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:57.640 killing process with pid 76003 00:18:57.640 05:16:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76003' 00:18:57.640 05:16:16 -- common/autotest_common.sh@945 -- # kill 
76003 00:18:57.640 [2024-07-26 05:16:16.522991] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:57.640 05:16:16 -- common/autotest_common.sh@950 -- # wait 76003 00:18:57.640 [2024-07-26 05:16:16.523147] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:58.576 05:16:17 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:58.576 00:18:58.576 real 0m13.598s 00:18:58.576 user 0m22.893s 00:18:58.576 sys 0m1.962s 00:18:58.576 05:16:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:58.576 05:16:17 -- common/autotest_common.sh@10 -- # set +x 00:18:58.576 ************************************ 00:18:58.576 END TEST raid_state_function_test_sb 00:18:58.576 ************************************ 00:18:58.835 05:16:17 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:18:58.835 05:16:17 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:58.835 05:16:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:58.835 05:16:17 -- common/autotest_common.sh@10 -- # set +x 00:18:58.835 ************************************ 00:18:58.835 START TEST raid_superblock_test 00:18:58.835 ************************************ 00:18:58.835 05:16:17 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 4 00:18:58.835 05:16:17 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:18:58.835 05:16:17 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:18:58.835 05:16:17 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:58.835 05:16:17 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:58.835 05:16:17 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:58.835 05:16:17 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:58.835 05:16:17 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:58.835 05:16:17 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:58.835 05:16:17 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:58.835 05:16:17 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:58.835 05:16:17 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:58.835 05:16:17 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:58.835 05:16:17 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:58.835 05:16:17 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:18:58.835 05:16:17 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:18:58.835 05:16:17 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:18:58.835 05:16:17 -- bdev/bdev_raid.sh@357 -- # raid_pid=76422 00:18:58.835 05:16:17 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:58.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:58.835 05:16:17 -- bdev/bdev_raid.sh@358 -- # waitforlisten 76422 /var/tmp/spdk-raid.sock 00:18:58.835 05:16:17 -- common/autotest_common.sh@819 -- # '[' -z 76422 ']' 00:18:58.835 05:16:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:58.835 05:16:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:58.835 05:16:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:18:58.835 05:16:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:58.835 05:16:17 -- common/autotest_common.sh@10 -- # set +x 00:18:58.835 [2024-07-26 05:16:17.812194] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:58.835 [2024-07-26 05:16:17.812574] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76422 ] 00:18:59.094 [2024-07-26 05:16:17.987117] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.094 [2024-07-26 05:16:18.191557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.352 [2024-07-26 05:16:18.390668] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:59.919 05:16:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:59.919 05:16:18 -- common/autotest_common.sh@852 -- # return 0 00:18:59.919 05:16:18 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:59.919 05:16:18 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:59.919 05:16:18 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:59.919 05:16:18 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:59.919 05:16:18 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:59.919 05:16:18 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:59.919 05:16:18 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:59.919 05:16:18 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:59.919 05:16:18 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:00.177 malloc1 00:19:00.177 05:16:19 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:00.436 [2024-07-26 05:16:19.314221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:00.436 [2024-07-26 05:16:19.314328] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.436 [2024-07-26 05:16:19.314369] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:19:00.436 [2024-07-26 05:16:19.314384] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.436 [2024-07-26 05:16:19.316913] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.436 pt1 00:19:00.436 [2024-07-26 05:16:19.317157] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:00.436 05:16:19 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:00.436 05:16:19 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:00.436 05:16:19 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:19:00.436 05:16:19 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:19:00.436 05:16:19 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:00.436 05:16:19 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:00.436 05:16:19 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:00.436 05:16:19 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:00.436 05:16:19 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:00.694 malloc2 00:19:00.694 05:16:19 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:00.952 [2024-07-26 05:16:19.925936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:00.952 [2024-07-26 05:16:19.926078] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.953 [2024-07-26 05:16:19.926127] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:19:00.953 [2024-07-26 05:16:19.926157] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.953 [2024-07-26 05:16:19.928895] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.953 [2024-07-26 05:16:19.928937] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:00.953 pt2 00:19:00.953 05:16:19 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:00.953 05:16:19 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:00.953 05:16:19 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:19:00.953 05:16:19 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:19:00.953 05:16:19 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:00.953 05:16:19 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:00.953 05:16:19 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:00.953 05:16:19 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:00.953 05:16:19 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:01.212 malloc3 00:19:01.212 05:16:20 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:01.470 [2024-07-26 05:16:20.430195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:01.470 [2024-07-26 05:16:20.430296] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.470 [2024-07-26 05:16:20.430334] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:19:01.470 [2024-07-26 05:16:20.430349] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.470 [2024-07-26 05:16:20.433126] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.470 [2024-07-26 05:16:20.433294] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:01.470 pt3 00:19:01.470 05:16:20 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:01.470 05:16:20 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:01.470 05:16:20 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:19:01.470 05:16:20 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:19:01.470 05:16:20 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:01.470 05:16:20 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:01.470 05:16:20 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:01.470 05:16:20 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:01.470 05:16:20 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:19:01.729 malloc4 00:19:01.729 05:16:20 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:01.988 [2024-07-26 05:16:20.936492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:01.988 [2024-07-26 05:16:20.936731] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.988 [2024-07-26 05:16:20.936784] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:19:01.988 [2024-07-26 05:16:20.936801] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.988 [2024-07-26 05:16:20.939602] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.988 [2024-07-26 05:16:20.939648] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:01.988 pt4 00:19:01.988 05:16:20 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:01.988 05:16:20 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:01.988 05:16:20 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:19:02.246 [2024-07-26 05:16:21.164766] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:02.246 [2024-07-26 05:16:21.167092] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:02.246 [2024-07-26 05:16:21.167198] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:02.246 [2024-07-26 05:16:21.167266] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:02.246 [2024-07-26 05:16:21.167539] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:19:02.246 [2024-07-26 05:16:21.167558] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:02.246 [2024-07-26 05:16:21.167697] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:19:02.246 [2024-07-26 05:16:21.168132] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:19:02.247 [2024-07-26 05:16:21.168155] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009380 00:19:02.247 [2024-07-26 05:16:21.168338] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.247 05:16:21 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:02.247 05:16:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:02.247 05:16:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:02.247 05:16:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:02.247 05:16:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:02.247 05:16:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:02.247 05:16:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:02.247 05:16:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:02.247 05:16:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:02.247 05:16:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:02.247 05:16:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:19:02.247 05:16:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.505 05:16:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:02.505 "name": "raid_bdev1", 00:19:02.505 "uuid": "d92cdeb4-409a-49a1-a041-7394288c5a60", 00:19:02.505 "strip_size_kb": 64, 00:19:02.505 "state": "online", 00:19:02.505 "raid_level": "concat", 00:19:02.505 "superblock": true, 00:19:02.505 "num_base_bdevs": 4, 00:19:02.505 "num_base_bdevs_discovered": 4, 00:19:02.505 "num_base_bdevs_operational": 4, 00:19:02.505 "base_bdevs_list": [ 00:19:02.505 { 00:19:02.505 "name": "pt1", 00:19:02.505 "uuid": "3ebff778-230c-5daa-952b-9082a50c9b1a", 00:19:02.505 "is_configured": true, 00:19:02.505 "data_offset": 2048, 00:19:02.505 "data_size": 63488 00:19:02.505 }, 00:19:02.505 { 00:19:02.505 "name": "pt2", 00:19:02.505 "uuid": "b6e964d3-6c3d-55c6-b1c2-eb7a3f90c8ed", 00:19:02.505 "is_configured": true, 00:19:02.505 "data_offset": 2048, 00:19:02.505 "data_size": 63488 00:19:02.505 }, 00:19:02.505 { 00:19:02.505 "name": "pt3", 00:19:02.505 "uuid": "7434cf1f-2a79-5689-be6f-13ec30cc8c02", 00:19:02.505 "is_configured": true, 00:19:02.505 "data_offset": 2048, 00:19:02.505 "data_size": 63488 00:19:02.505 }, 00:19:02.505 { 00:19:02.505 "name": "pt4", 00:19:02.505 "uuid": "a8b665a2-e24e-5da9-8850-1e5e2540bf51", 00:19:02.505 "is_configured": true, 00:19:02.505 "data_offset": 2048, 00:19:02.505 "data_size": 63488 00:19:02.505 } 00:19:02.505 ] 00:19:02.505 }' 00:19:02.505 05:16:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:02.505 05:16:21 -- common/autotest_common.sh@10 -- # set +x 00:19:02.764 05:16:21 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:19:02.764 05:16:21 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:03.023 [2024-07-26 05:16:22.069516] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:03.023 05:16:22 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=d92cdeb4-409a-49a1-a041-7394288c5a60 00:19:03.023 05:16:22 -- bdev/bdev_raid.sh@380 -- # '[' -z d92cdeb4-409a-49a1-a041-7394288c5a60 ']' 00:19:03.023 05:16:22 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:03.282 [2024-07-26 05:16:22.337384] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:03.282 [2024-07-26 05:16:22.337429] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:03.282 [2024-07-26 05:16:22.337519] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:03.282 [2024-07-26 05:16:22.337604] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:03.282 [2024-07-26 05:16:22.337619] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, state offline 00:19:03.282 05:16:22 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.282 05:16:22 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:19:03.541 05:16:22 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:19:03.541 05:16:22 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:19:03.541 05:16:22 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:03.541 05:16:22 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
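The concat assembly verified above can be reproduced by hand against the same RPC socket. The following is a minimal sketch rather than harness output: it assumes the bdev_svc application from this run is still listening on /var/tmp/spdk-raid.sock and it reuses the bdev names, UUIDs and sizes shown in the trace.

  # four 32 MB malloc bdevs (512-byte blocks), each wrapped in a passthru bdev
  for i in 1 2 3 4; do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_malloc_create 32 512 -b malloc$i
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
  done
  # concat raid bdev with a 64 KiB strip size; -s writes an on-disk superblock
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
  # the array should report "online" with all four base bdevs discovered
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'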
00:19:04.108 05:16:22 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:04.108 05:16:22 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:04.367 05:16:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:04.367 05:16:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:04.626 05:16:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:04.626 05:16:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:04.883 05:16:23 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:04.883 05:16:23 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:04.884 05:16:23 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:19:04.884 05:16:23 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:04.884 05:16:23 -- common/autotest_common.sh@640 -- # local es=0 00:19:04.884 05:16:23 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:04.884 05:16:23 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:04.884 05:16:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:04.884 05:16:23 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:04.884 05:16:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:04.884 05:16:23 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:04.884 05:16:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:04.884 05:16:23 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:04.884 05:16:23 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:04.884 05:16:23 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:05.450 [2024-07-26 05:16:24.270214] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:05.450 [2024-07-26 05:16:24.272426] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:05.450 [2024-07-26 05:16:24.272510] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:05.450 [2024-07-26 05:16:24.272572] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:05.450 [2024-07-26 05:16:24.272634] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:19:05.450 [2024-07-26 05:16:24.272695] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:19:05.450 [2024-07-26 05:16:24.272726] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:19:05.450 
[2024-07-26 05:16:24.272750] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:19:05.450 [2024-07-26 05:16:24.272787] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:05.450 [2024-07-26 05:16:24.272815] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state configuring 00:19:05.450 request: 00:19:05.450 { 00:19:05.450 "name": "raid_bdev1", 00:19:05.450 "raid_level": "concat", 00:19:05.450 "base_bdevs": [ 00:19:05.450 "malloc1", 00:19:05.450 "malloc2", 00:19:05.450 "malloc3", 00:19:05.450 "malloc4" 00:19:05.450 ], 00:19:05.450 "superblock": false, 00:19:05.450 "strip_size_kb": 64, 00:19:05.450 "method": "bdev_raid_create", 00:19:05.450 "req_id": 1 00:19:05.450 } 00:19:05.450 Got JSON-RPC error response 00:19:05.450 response: 00:19:05.450 { 00:19:05.450 "code": -17, 00:19:05.450 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:05.450 } 00:19:05.450 05:16:24 -- common/autotest_common.sh@643 -- # es=1 00:19:05.450 05:16:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:05.450 05:16:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:05.450 05:16:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:05.450 05:16:24 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.450 05:16:24 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:19:05.708 05:16:24 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:19:05.708 05:16:24 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:19:05.709 05:16:24 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:05.968 [2024-07-26 05:16:24.826444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:05.968 [2024-07-26 05:16:24.826524] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.968 [2024-07-26 05:16:24.826558] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:19:05.968 [2024-07-26 05:16:24.826573] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.968 [2024-07-26 05:16:24.829431] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.968 [2024-07-26 05:16:24.829476] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:05.968 [2024-07-26 05:16:24.829631] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:05.968 [2024-07-26 05:16:24.829696] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:05.968 pt1 00:19:05.968 05:16:24 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:19:05.968 05:16:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:05.968 05:16:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:05.968 05:16:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:05.968 05:16:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:05.968 05:16:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:05.968 05:16:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:05.968 05:16:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:05.968 05:16:24 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:19:05.968 05:16:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:05.968 05:16:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.968 05:16:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.227 05:16:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:06.227 "name": "raid_bdev1", 00:19:06.227 "uuid": "d92cdeb4-409a-49a1-a041-7394288c5a60", 00:19:06.227 "strip_size_kb": 64, 00:19:06.227 "state": "configuring", 00:19:06.227 "raid_level": "concat", 00:19:06.227 "superblock": true, 00:19:06.227 "num_base_bdevs": 4, 00:19:06.227 "num_base_bdevs_discovered": 1, 00:19:06.227 "num_base_bdevs_operational": 4, 00:19:06.227 "base_bdevs_list": [ 00:19:06.227 { 00:19:06.227 "name": "pt1", 00:19:06.227 "uuid": "3ebff778-230c-5daa-952b-9082a50c9b1a", 00:19:06.227 "is_configured": true, 00:19:06.227 "data_offset": 2048, 00:19:06.227 "data_size": 63488 00:19:06.227 }, 00:19:06.227 { 00:19:06.227 "name": null, 00:19:06.227 "uuid": "b6e964d3-6c3d-55c6-b1c2-eb7a3f90c8ed", 00:19:06.227 "is_configured": false, 00:19:06.227 "data_offset": 2048, 00:19:06.227 "data_size": 63488 00:19:06.227 }, 00:19:06.227 { 00:19:06.227 "name": null, 00:19:06.227 "uuid": "7434cf1f-2a79-5689-be6f-13ec30cc8c02", 00:19:06.227 "is_configured": false, 00:19:06.227 "data_offset": 2048, 00:19:06.227 "data_size": 63488 00:19:06.227 }, 00:19:06.227 { 00:19:06.227 "name": null, 00:19:06.227 "uuid": "a8b665a2-e24e-5da9-8850-1e5e2540bf51", 00:19:06.227 "is_configured": false, 00:19:06.227 "data_offset": 2048, 00:19:06.227 "data_size": 63488 00:19:06.227 } 00:19:06.227 ] 00:19:06.227 }' 00:19:06.227 05:16:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:06.227 05:16:25 -- common/autotest_common.sh@10 -- # set +x 00:19:06.486 05:16:25 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:19:06.486 05:16:25 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:06.745 [2024-07-26 05:16:25.726740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:06.745 [2024-07-26 05:16:25.726817] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.745 [2024-07-26 05:16:25.726864] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:19:06.745 [2024-07-26 05:16:25.726882] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.745 [2024-07-26 05:16:25.727465] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.745 [2024-07-26 05:16:25.727491] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:06.745 [2024-07-26 05:16:25.727603] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:06.745 [2024-07-26 05:16:25.727631] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:06.745 pt2 00:19:06.745 05:16:25 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:07.003 [2024-07-26 05:16:25.994849] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:07.003 05:16:26 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:19:07.003 05:16:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
00:19:07.003 05:16:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:07.003 05:16:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:07.003 05:16:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:07.003 05:16:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:07.003 05:16:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:07.003 05:16:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:07.003 05:16:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:07.003 05:16:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:07.003 05:16:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.003 05:16:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.262 05:16:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:07.262 "name": "raid_bdev1", 00:19:07.262 "uuid": "d92cdeb4-409a-49a1-a041-7394288c5a60", 00:19:07.262 "strip_size_kb": 64, 00:19:07.262 "state": "configuring", 00:19:07.262 "raid_level": "concat", 00:19:07.262 "superblock": true, 00:19:07.262 "num_base_bdevs": 4, 00:19:07.262 "num_base_bdevs_discovered": 1, 00:19:07.262 "num_base_bdevs_operational": 4, 00:19:07.262 "base_bdevs_list": [ 00:19:07.262 { 00:19:07.262 "name": "pt1", 00:19:07.262 "uuid": "3ebff778-230c-5daa-952b-9082a50c9b1a", 00:19:07.262 "is_configured": true, 00:19:07.262 "data_offset": 2048, 00:19:07.262 "data_size": 63488 00:19:07.262 }, 00:19:07.262 { 00:19:07.262 "name": null, 00:19:07.262 "uuid": "b6e964d3-6c3d-55c6-b1c2-eb7a3f90c8ed", 00:19:07.262 "is_configured": false, 00:19:07.262 "data_offset": 2048, 00:19:07.262 "data_size": 63488 00:19:07.262 }, 00:19:07.262 { 00:19:07.262 "name": null, 00:19:07.262 "uuid": "7434cf1f-2a79-5689-be6f-13ec30cc8c02", 00:19:07.262 "is_configured": false, 00:19:07.262 "data_offset": 2048, 00:19:07.262 "data_size": 63488 00:19:07.262 }, 00:19:07.262 { 00:19:07.262 "name": null, 00:19:07.262 "uuid": "a8b665a2-e24e-5da9-8850-1e5e2540bf51", 00:19:07.262 "is_configured": false, 00:19:07.262 "data_offset": 2048, 00:19:07.262 "data_size": 63488 00:19:07.262 } 00:19:07.262 ] 00:19:07.262 }' 00:19:07.262 05:16:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:07.262 05:16:26 -- common/autotest_common.sh@10 -- # set +x 00:19:07.521 05:16:26 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:19:07.521 05:16:26 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:07.521 05:16:26 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:07.779 [2024-07-26 05:16:26.867287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:07.779 [2024-07-26 05:16:26.867436] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.779 [2024-07-26 05:16:26.867498] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:19:07.779 [2024-07-26 05:16:26.867520] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.779 [2024-07-26 05:16:26.868046] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.779 [2024-07-26 05:16:26.868075] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:07.779 [2024-07-26 05:16:26.868210] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:19:07.779 [2024-07-26 05:16:26.868306] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:07.779 pt2 00:19:08.038 05:16:26 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:08.038 05:16:26 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:08.038 05:16:26 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:08.038 [2024-07-26 05:16:27.143348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:08.038 [2024-07-26 05:16:27.143430] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.038 [2024-07-26 05:16:27.143459] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:19:08.038 [2024-07-26 05:16:27.143474] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.038 [2024-07-26 05:16:27.143904] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.038 [2024-07-26 05:16:27.143945] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:08.038 [2024-07-26 05:16:27.144070] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:08.038 [2024-07-26 05:16:27.144108] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:08.296 pt3 00:19:08.296 05:16:27 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:08.296 05:16:27 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:08.296 05:16:27 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:08.555 [2024-07-26 05:16:27.419550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:08.555 [2024-07-26 05:16:27.419663] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.555 [2024-07-26 05:16:27.419695] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:19:08.555 [2024-07-26 05:16:27.419712] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.555 [2024-07-26 05:16:27.420275] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.555 [2024-07-26 05:16:27.420306] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:08.555 [2024-07-26 05:16:27.420407] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:08.555 [2024-07-26 05:16:27.420441] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:08.555 [2024-07-26 05:16:27.420597] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:19:08.555 [2024-07-26 05:16:27.420617] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:08.555 [2024-07-26 05:16:27.420722] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:19:08.555 [2024-07-26 05:16:27.421181] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:19:08.555 [2024-07-26 05:16:27.421198] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:19:08.555 [2024-07-26 05:16:27.421348] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:19:08.555 pt4 00:19:08.555 05:16:27 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:08.555 05:16:27 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:08.555 05:16:27 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:08.555 05:16:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:08.555 05:16:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:08.555 05:16:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:08.555 05:16:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:08.555 05:16:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:08.555 05:16:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:08.555 05:16:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:08.555 05:16:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:08.555 05:16:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:08.555 05:16:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.555 05:16:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.555 05:16:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:08.555 "name": "raid_bdev1", 00:19:08.555 "uuid": "d92cdeb4-409a-49a1-a041-7394288c5a60", 00:19:08.555 "strip_size_kb": 64, 00:19:08.555 "state": "online", 00:19:08.555 "raid_level": "concat", 00:19:08.555 "superblock": true, 00:19:08.555 "num_base_bdevs": 4, 00:19:08.555 "num_base_bdevs_discovered": 4, 00:19:08.555 "num_base_bdevs_operational": 4, 00:19:08.555 "base_bdevs_list": [ 00:19:08.555 { 00:19:08.555 "name": "pt1", 00:19:08.555 "uuid": "3ebff778-230c-5daa-952b-9082a50c9b1a", 00:19:08.555 "is_configured": true, 00:19:08.555 "data_offset": 2048, 00:19:08.555 "data_size": 63488 00:19:08.555 }, 00:19:08.555 { 00:19:08.555 "name": "pt2", 00:19:08.555 "uuid": "b6e964d3-6c3d-55c6-b1c2-eb7a3f90c8ed", 00:19:08.555 "is_configured": true, 00:19:08.555 "data_offset": 2048, 00:19:08.555 "data_size": 63488 00:19:08.555 }, 00:19:08.555 { 00:19:08.555 "name": "pt3", 00:19:08.555 "uuid": "7434cf1f-2a79-5689-be6f-13ec30cc8c02", 00:19:08.555 "is_configured": true, 00:19:08.555 "data_offset": 2048, 00:19:08.555 "data_size": 63488 00:19:08.555 }, 00:19:08.555 { 00:19:08.555 "name": "pt4", 00:19:08.555 "uuid": "a8b665a2-e24e-5da9-8850-1e5e2540bf51", 00:19:08.555 "is_configured": true, 00:19:08.555 "data_offset": 2048, 00:19:08.555 "data_size": 63488 00:19:08.555 } 00:19:08.555 ] 00:19:08.555 }' 00:19:08.555 05:16:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:08.555 05:16:27 -- common/autotest_common.sh@10 -- # set +x 00:19:09.123 05:16:27 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:09.123 05:16:27 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:19:09.123 [2024-07-26 05:16:28.220074] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:09.381 05:16:28 -- bdev/bdev_raid.sh@430 -- # '[' d92cdeb4-409a-49a1-a041-7394288c5a60 '!=' d92cdeb4-409a-49a1-a041-7394288c5a60 ']' 00:19:09.381 05:16:28 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:19:09.381 05:16:28 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:09.381 05:16:28 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:09.381 05:16:28 -- bdev/bdev_raid.sh@511 -- # killprocess 76422 00:19:09.381 05:16:28 -- common/autotest_common.sh@926 -- # '[' 
-z 76422 ']' 00:19:09.381 05:16:28 -- common/autotest_common.sh@930 -- # kill -0 76422 00:19:09.381 05:16:28 -- common/autotest_common.sh@931 -- # uname 00:19:09.381 05:16:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:09.381 05:16:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76422 00:19:09.381 killing process with pid 76422 00:19:09.381 05:16:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:09.381 05:16:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:09.381 05:16:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76422' 00:19:09.381 05:16:28 -- common/autotest_common.sh@945 -- # kill 76422 00:19:09.381 [2024-07-26 05:16:28.269028] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:09.381 [2024-07-26 05:16:28.269106] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:09.381 05:16:28 -- common/autotest_common.sh@950 -- # wait 76422 00:19:09.381 [2024-07-26 05:16:28.269202] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:09.381 [2024-07-26 05:16:28.269218] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:19:09.640 [2024-07-26 05:16:28.582951] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@513 -- # return 0 00:19:11.018 00:19:11.018 real 0m11.949s 00:19:11.018 user 0m20.000s 00:19:11.018 sys 0m1.669s 00:19:11.018 05:16:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:11.018 05:16:29 -- common/autotest_common.sh@10 -- # set +x 00:19:11.018 ************************************ 00:19:11.018 END TEST raid_superblock_test 00:19:11.018 ************************************ 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:19:11.018 05:16:29 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:19:11.018 05:16:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:11.018 05:16:29 -- common/autotest_common.sh@10 -- # set +x 00:19:11.018 ************************************ 00:19:11.018 START TEST raid_state_function_test 00:19:11.018 ************************************ 00:19:11.018 05:16:29 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 false 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@206 -- 
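The state-function test that starts here drives the same RPC interface in the opposite order: the raid1 bdev is declared before any of its base bdevs exist, sits in the "configuring" state, and transitions to "online" once all four bases have been created (visible in the trace below). A minimal sketch of that flow, using the binary and socket paths shown in the log; it is illustrative rather than a copy of the harness.

  # bare bdev_svc app with bdev_raid debug logging, serving RPC on a private socket
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
    -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  sleep 1   # the harness waits for the RPC socket; a short sleep suffices for a manual run

  # declare the raid1 array first; its base bdevs do not exist yet, so it stays "configuring"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

  # create the base bdevs; each one is claimed as it appears, and the array
  # leaves "configuring" for "online" once the fourth is added
  for i in 1 2 3 4; do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_malloc_create 32 512 -b BaseBdev$i
  done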
# (( i <= num_base_bdevs )) 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@226 -- # raid_pid=76724 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 76724' 00:19:11.018 Process raid pid: 76724 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@228 -- # waitforlisten 76724 /var/tmp/spdk-raid.sock 00:19:11.018 05:16:29 -- common/autotest_common.sh@819 -- # '[' -z 76724 ']' 00:19:11.018 05:16:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:11.018 05:16:29 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:11.018 05:16:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:11.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:11.018 05:16:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:11.018 05:16:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:11.018 05:16:29 -- common/autotest_common.sh@10 -- # set +x 00:19:11.018 [2024-07-26 05:16:29.819108] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:19:11.018 [2024-07-26 05:16:29.819273] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:11.018 [2024-07-26 05:16:29.985453] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.277 [2024-07-26 05:16:30.157792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.277 [2024-07-26 05:16:30.320900] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:11.846 05:16:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:11.846 05:16:30 -- common/autotest_common.sh@852 -- # return 0 00:19:11.846 05:16:30 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:11.846 [2024-07-26 05:16:30.930151] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:11.846 [2024-07-26 05:16:30.930234] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:11.846 [2024-07-26 05:16:30.930275] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:11.846 [2024-07-26 05:16:30.930294] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:11.846 [2024-07-26 05:16:30.930304] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:11.846 [2024-07-26 05:16:30.930320] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:11.846 [2024-07-26 05:16:30.930329] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:11.846 [2024-07-26 05:16:30.930343] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:11.846 05:16:30 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:11.846 05:16:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:11.846 05:16:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:11.846 05:16:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:11.846 05:16:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:11.846 05:16:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:11.846 05:16:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:11.846 05:16:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:11.846 05:16:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:11.846 05:16:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:11.846 05:16:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.846 05:16:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:12.126 05:16:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:12.126 "name": "Existed_Raid", 00:19:12.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.126 "strip_size_kb": 0, 00:19:12.126 "state": "configuring", 00:19:12.126 "raid_level": "raid1", 00:19:12.126 "superblock": false, 00:19:12.126 "num_base_bdevs": 4, 00:19:12.126 "num_base_bdevs_discovered": 0, 00:19:12.126 "num_base_bdevs_operational": 4, 00:19:12.126 "base_bdevs_list": [ 00:19:12.126 { 00:19:12.126 "name": 
"BaseBdev1", 00:19:12.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.126 "is_configured": false, 00:19:12.126 "data_offset": 0, 00:19:12.126 "data_size": 0 00:19:12.126 }, 00:19:12.126 { 00:19:12.126 "name": "BaseBdev2", 00:19:12.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.126 "is_configured": false, 00:19:12.126 "data_offset": 0, 00:19:12.126 "data_size": 0 00:19:12.126 }, 00:19:12.126 { 00:19:12.126 "name": "BaseBdev3", 00:19:12.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.126 "is_configured": false, 00:19:12.126 "data_offset": 0, 00:19:12.126 "data_size": 0 00:19:12.126 }, 00:19:12.126 { 00:19:12.126 "name": "BaseBdev4", 00:19:12.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.126 "is_configured": false, 00:19:12.126 "data_offset": 0, 00:19:12.126 "data_size": 0 00:19:12.126 } 00:19:12.126 ] 00:19:12.126 }' 00:19:12.126 05:16:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:12.126 05:16:31 -- common/autotest_common.sh@10 -- # set +x 00:19:12.694 05:16:31 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:12.694 [2024-07-26 05:16:31.710339] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:12.694 [2024-07-26 05:16:31.710401] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:19:12.694 05:16:31 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:12.953 [2024-07-26 05:16:31.906383] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:12.953 [2024-07-26 05:16:31.906471] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:12.953 [2024-07-26 05:16:31.906485] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:12.953 [2024-07-26 05:16:31.906500] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:12.953 [2024-07-26 05:16:31.906509] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:12.953 [2024-07-26 05:16:31.906521] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:12.953 [2024-07-26 05:16:31.906544] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:12.953 [2024-07-26 05:16:31.906558] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:12.953 05:16:31 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:13.212 [2024-07-26 05:16:32.125172] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:13.212 BaseBdev1 00:19:13.212 05:16:32 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:13.212 05:16:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:13.212 05:16:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:13.212 05:16:32 -- common/autotest_common.sh@889 -- # local i 00:19:13.212 05:16:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:13.212 05:16:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:13.212 05:16:32 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:13.471 05:16:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:13.730 [ 00:19:13.730 { 00:19:13.730 "name": "BaseBdev1", 00:19:13.730 "aliases": [ 00:19:13.730 "cb2ea1b0-95c5-410d-97b7-ad7dbdc93f3a" 00:19:13.730 ], 00:19:13.730 "product_name": "Malloc disk", 00:19:13.730 "block_size": 512, 00:19:13.730 "num_blocks": 65536, 00:19:13.730 "uuid": "cb2ea1b0-95c5-410d-97b7-ad7dbdc93f3a", 00:19:13.730 "assigned_rate_limits": { 00:19:13.730 "rw_ios_per_sec": 0, 00:19:13.730 "rw_mbytes_per_sec": 0, 00:19:13.730 "r_mbytes_per_sec": 0, 00:19:13.730 "w_mbytes_per_sec": 0 00:19:13.730 }, 00:19:13.730 "claimed": true, 00:19:13.730 "claim_type": "exclusive_write", 00:19:13.730 "zoned": false, 00:19:13.730 "supported_io_types": { 00:19:13.730 "read": true, 00:19:13.730 "write": true, 00:19:13.730 "unmap": true, 00:19:13.730 "write_zeroes": true, 00:19:13.730 "flush": true, 00:19:13.730 "reset": true, 00:19:13.730 "compare": false, 00:19:13.730 "compare_and_write": false, 00:19:13.730 "abort": true, 00:19:13.730 "nvme_admin": false, 00:19:13.730 "nvme_io": false 00:19:13.730 }, 00:19:13.730 "memory_domains": [ 00:19:13.730 { 00:19:13.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.730 "dma_device_type": 2 00:19:13.730 } 00:19:13.730 ], 00:19:13.730 "driver_specific": {} 00:19:13.730 } 00:19:13.730 ] 00:19:13.731 05:16:32 -- common/autotest_common.sh@895 -- # return 0 00:19:13.731 05:16:32 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:13.731 05:16:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:13.731 05:16:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:13.731 05:16:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:13.731 05:16:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:13.731 05:16:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:13.731 05:16:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:13.731 05:16:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:13.731 05:16:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:13.731 05:16:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:13.731 05:16:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.731 05:16:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.990 05:16:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:13.990 "name": "Existed_Raid", 00:19:13.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.990 "strip_size_kb": 0, 00:19:13.990 "state": "configuring", 00:19:13.990 "raid_level": "raid1", 00:19:13.990 "superblock": false, 00:19:13.990 "num_base_bdevs": 4, 00:19:13.990 "num_base_bdevs_discovered": 1, 00:19:13.990 "num_base_bdevs_operational": 4, 00:19:13.990 "base_bdevs_list": [ 00:19:13.990 { 00:19:13.990 "name": "BaseBdev1", 00:19:13.990 "uuid": "cb2ea1b0-95c5-410d-97b7-ad7dbdc93f3a", 00:19:13.990 "is_configured": true, 00:19:13.990 "data_offset": 0, 00:19:13.990 "data_size": 65536 00:19:13.990 }, 00:19:13.990 { 00:19:13.990 "name": "BaseBdev2", 00:19:13.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.990 "is_configured": false, 00:19:13.990 "data_offset": 0, 00:19:13.990 "data_size": 0 00:19:13.990 }, 
00:19:13.990 { 00:19:13.990 "name": "BaseBdev3", 00:19:13.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.990 "is_configured": false, 00:19:13.990 "data_offset": 0, 00:19:13.990 "data_size": 0 00:19:13.990 }, 00:19:13.990 { 00:19:13.990 "name": "BaseBdev4", 00:19:13.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.990 "is_configured": false, 00:19:13.990 "data_offset": 0, 00:19:13.990 "data_size": 0 00:19:13.990 } 00:19:13.990 ] 00:19:13.990 }' 00:19:13.990 05:16:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:13.990 05:16:32 -- common/autotest_common.sh@10 -- # set +x 00:19:14.249 05:16:33 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:14.249 [2024-07-26 05:16:33.313467] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:14.249 [2024-07-26 05:16:33.313538] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:19:14.249 05:16:33 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:19:14.249 05:16:33 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:14.508 [2024-07-26 05:16:33.565623] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:14.508 [2024-07-26 05:16:33.567612] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:14.508 [2024-07-26 05:16:33.567691] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:14.508 [2024-07-26 05:16:33.567705] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:14.508 [2024-07-26 05:16:33.567719] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:14.508 [2024-07-26 05:16:33.567728] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:14.508 [2024-07-26 05:16:33.567743] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:14.508 05:16:33 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:14.508 05:16:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:14.508 05:16:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:14.508 05:16:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:14.508 05:16:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:14.508 05:16:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:14.508 05:16:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:14.508 05:16:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:14.508 05:16:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:14.508 05:16:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:14.508 05:16:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:14.508 05:16:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:14.508 05:16:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.508 05:16:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.767 05:16:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:14.767 "name": "Existed_Raid", 00:19:14.767 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:14.767 "strip_size_kb": 0, 00:19:14.767 "state": "configuring", 00:19:14.767 "raid_level": "raid1", 00:19:14.767 "superblock": false, 00:19:14.767 "num_base_bdevs": 4, 00:19:14.767 "num_base_bdevs_discovered": 1, 00:19:14.767 "num_base_bdevs_operational": 4, 00:19:14.767 "base_bdevs_list": [ 00:19:14.767 { 00:19:14.767 "name": "BaseBdev1", 00:19:14.767 "uuid": "cb2ea1b0-95c5-410d-97b7-ad7dbdc93f3a", 00:19:14.767 "is_configured": true, 00:19:14.767 "data_offset": 0, 00:19:14.767 "data_size": 65536 00:19:14.767 }, 00:19:14.767 { 00:19:14.767 "name": "BaseBdev2", 00:19:14.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.767 "is_configured": false, 00:19:14.767 "data_offset": 0, 00:19:14.767 "data_size": 0 00:19:14.767 }, 00:19:14.767 { 00:19:14.767 "name": "BaseBdev3", 00:19:14.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.767 "is_configured": false, 00:19:14.767 "data_offset": 0, 00:19:14.767 "data_size": 0 00:19:14.767 }, 00:19:14.767 { 00:19:14.767 "name": "BaseBdev4", 00:19:14.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.767 "is_configured": false, 00:19:14.767 "data_offset": 0, 00:19:14.767 "data_size": 0 00:19:14.767 } 00:19:14.767 ] 00:19:14.767 }' 00:19:14.767 05:16:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:14.767 05:16:33 -- common/autotest_common.sh@10 -- # set +x 00:19:15.026 05:16:34 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:15.285 [2024-07-26 05:16:34.364301] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:15.285 BaseBdev2 00:19:15.285 05:16:34 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:15.285 05:16:34 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:19:15.285 05:16:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:15.285 05:16:34 -- common/autotest_common.sh@889 -- # local i 00:19:15.285 05:16:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:15.285 05:16:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:15.285 05:16:34 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:15.544 05:16:34 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:15.803 [ 00:19:15.803 { 00:19:15.803 "name": "BaseBdev2", 00:19:15.803 "aliases": [ 00:19:15.803 "1e74eec6-31da-43e6-9329-39e6f206120a" 00:19:15.803 ], 00:19:15.803 "product_name": "Malloc disk", 00:19:15.803 "block_size": 512, 00:19:15.803 "num_blocks": 65536, 00:19:15.803 "uuid": "1e74eec6-31da-43e6-9329-39e6f206120a", 00:19:15.803 "assigned_rate_limits": { 00:19:15.803 "rw_ios_per_sec": 0, 00:19:15.803 "rw_mbytes_per_sec": 0, 00:19:15.803 "r_mbytes_per_sec": 0, 00:19:15.803 "w_mbytes_per_sec": 0 00:19:15.803 }, 00:19:15.803 "claimed": true, 00:19:15.803 "claim_type": "exclusive_write", 00:19:15.803 "zoned": false, 00:19:15.803 "supported_io_types": { 00:19:15.803 "read": true, 00:19:15.803 "write": true, 00:19:15.803 "unmap": true, 00:19:15.803 "write_zeroes": true, 00:19:15.803 "flush": true, 00:19:15.803 "reset": true, 00:19:15.803 "compare": false, 00:19:15.803 "compare_and_write": false, 00:19:15.803 "abort": true, 00:19:15.803 "nvme_admin": false, 00:19:15.803 "nvme_io": false 00:19:15.803 }, 00:19:15.803 "memory_domains": [ 00:19:15.803 { 
00:19:15.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.803 "dma_device_type": 2 00:19:15.803 } 00:19:15.803 ], 00:19:15.803 "driver_specific": {} 00:19:15.803 } 00:19:15.803 ] 00:19:15.803 05:16:34 -- common/autotest_common.sh@895 -- # return 0 00:19:15.803 05:16:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:15.803 05:16:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:15.803 05:16:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:15.803 05:16:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:15.803 05:16:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:15.803 05:16:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:15.803 05:16:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:15.803 05:16:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:15.803 05:16:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:15.803 05:16:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:15.803 05:16:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:15.803 05:16:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:15.803 05:16:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.803 05:16:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.062 05:16:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:16.062 "name": "Existed_Raid", 00:19:16.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.062 "strip_size_kb": 0, 00:19:16.062 "state": "configuring", 00:19:16.062 "raid_level": "raid1", 00:19:16.062 "superblock": false, 00:19:16.062 "num_base_bdevs": 4, 00:19:16.062 "num_base_bdevs_discovered": 2, 00:19:16.062 "num_base_bdevs_operational": 4, 00:19:16.062 "base_bdevs_list": [ 00:19:16.062 { 00:19:16.062 "name": "BaseBdev1", 00:19:16.062 "uuid": "cb2ea1b0-95c5-410d-97b7-ad7dbdc93f3a", 00:19:16.062 "is_configured": true, 00:19:16.062 "data_offset": 0, 00:19:16.062 "data_size": 65536 00:19:16.062 }, 00:19:16.062 { 00:19:16.062 "name": "BaseBdev2", 00:19:16.062 "uuid": "1e74eec6-31da-43e6-9329-39e6f206120a", 00:19:16.062 "is_configured": true, 00:19:16.062 "data_offset": 0, 00:19:16.062 "data_size": 65536 00:19:16.062 }, 00:19:16.062 { 00:19:16.062 "name": "BaseBdev3", 00:19:16.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.062 "is_configured": false, 00:19:16.062 "data_offset": 0, 00:19:16.062 "data_size": 0 00:19:16.062 }, 00:19:16.062 { 00:19:16.062 "name": "BaseBdev4", 00:19:16.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.062 "is_configured": false, 00:19:16.062 "data_offset": 0, 00:19:16.062 "data_size": 0 00:19:16.062 } 00:19:16.062 ] 00:19:16.062 }' 00:19:16.062 05:16:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:16.062 05:16:35 -- common/autotest_common.sh@10 -- # set +x 00:19:16.321 05:16:35 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:16.580 [2024-07-26 05:16:35.568282] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:16.580 BaseBdev3 00:19:16.580 05:16:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:16.580 05:16:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:19:16.581 05:16:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:16.581 05:16:35 -- 
common/autotest_common.sh@889 -- # local i 00:19:16.581 05:16:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:16.581 05:16:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:16.581 05:16:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:16.839 05:16:35 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:17.099 [ 00:19:17.099 { 00:19:17.099 "name": "BaseBdev3", 00:19:17.099 "aliases": [ 00:19:17.099 "fbad3b68-ac6b-4307-aa3c-a821260070f4" 00:19:17.099 ], 00:19:17.099 "product_name": "Malloc disk", 00:19:17.099 "block_size": 512, 00:19:17.099 "num_blocks": 65536, 00:19:17.099 "uuid": "fbad3b68-ac6b-4307-aa3c-a821260070f4", 00:19:17.099 "assigned_rate_limits": { 00:19:17.099 "rw_ios_per_sec": 0, 00:19:17.099 "rw_mbytes_per_sec": 0, 00:19:17.099 "r_mbytes_per_sec": 0, 00:19:17.099 "w_mbytes_per_sec": 0 00:19:17.099 }, 00:19:17.099 "claimed": true, 00:19:17.099 "claim_type": "exclusive_write", 00:19:17.099 "zoned": false, 00:19:17.099 "supported_io_types": { 00:19:17.099 "read": true, 00:19:17.099 "write": true, 00:19:17.099 "unmap": true, 00:19:17.099 "write_zeroes": true, 00:19:17.099 "flush": true, 00:19:17.099 "reset": true, 00:19:17.099 "compare": false, 00:19:17.099 "compare_and_write": false, 00:19:17.099 "abort": true, 00:19:17.099 "nvme_admin": false, 00:19:17.099 "nvme_io": false 00:19:17.099 }, 00:19:17.099 "memory_domains": [ 00:19:17.099 { 00:19:17.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.099 "dma_device_type": 2 00:19:17.099 } 00:19:17.099 ], 00:19:17.099 "driver_specific": {} 00:19:17.099 } 00:19:17.099 ] 00:19:17.099 05:16:36 -- common/autotest_common.sh@895 -- # return 0 00:19:17.099 05:16:36 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:17.099 05:16:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:17.099 05:16:36 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:17.099 05:16:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:17.099 05:16:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:17.099 05:16:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:17.099 05:16:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:17.099 05:16:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:17.099 05:16:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:17.099 05:16:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:17.099 05:16:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:17.099 05:16:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:17.099 05:16:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.099 05:16:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.358 05:16:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:17.358 "name": "Existed_Raid", 00:19:17.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.358 "strip_size_kb": 0, 00:19:17.358 "state": "configuring", 00:19:17.358 "raid_level": "raid1", 00:19:17.358 "superblock": false, 00:19:17.358 "num_base_bdevs": 4, 00:19:17.358 "num_base_bdevs_discovered": 3, 00:19:17.358 "num_base_bdevs_operational": 4, 00:19:17.358 "base_bdevs_list": [ 00:19:17.358 { 00:19:17.358 "name": "BaseBdev1", 
00:19:17.358 "uuid": "cb2ea1b0-95c5-410d-97b7-ad7dbdc93f3a", 00:19:17.358 "is_configured": true, 00:19:17.358 "data_offset": 0, 00:19:17.358 "data_size": 65536 00:19:17.358 }, 00:19:17.358 { 00:19:17.358 "name": "BaseBdev2", 00:19:17.358 "uuid": "1e74eec6-31da-43e6-9329-39e6f206120a", 00:19:17.358 "is_configured": true, 00:19:17.358 "data_offset": 0, 00:19:17.358 "data_size": 65536 00:19:17.358 }, 00:19:17.358 { 00:19:17.358 "name": "BaseBdev3", 00:19:17.358 "uuid": "fbad3b68-ac6b-4307-aa3c-a821260070f4", 00:19:17.358 "is_configured": true, 00:19:17.358 "data_offset": 0, 00:19:17.358 "data_size": 65536 00:19:17.358 }, 00:19:17.358 { 00:19:17.358 "name": "BaseBdev4", 00:19:17.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.358 "is_configured": false, 00:19:17.358 "data_offset": 0, 00:19:17.358 "data_size": 0 00:19:17.358 } 00:19:17.358 ] 00:19:17.358 }' 00:19:17.358 05:16:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:17.358 05:16:36 -- common/autotest_common.sh@10 -- # set +x 00:19:17.617 05:16:36 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:17.617 [2024-07-26 05:16:36.709971] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:17.617 [2024-07-26 05:16:36.710086] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:19:17.617 [2024-07-26 05:16:36.710100] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:17.617 [2024-07-26 05:16:36.710220] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:19:17.617 [2024-07-26 05:16:36.710723] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:19:17.617 [2024-07-26 05:16:36.710743] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:19:17.617 [2024-07-26 05:16:36.711018] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.617 BaseBdev4 00:19:17.877 05:16:36 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:17.877 05:16:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:19:17.877 05:16:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:17.877 05:16:36 -- common/autotest_common.sh@889 -- # local i 00:19:17.877 05:16:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:17.877 05:16:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:17.877 05:16:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:17.877 05:16:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:18.136 [ 00:19:18.136 { 00:19:18.136 "name": "BaseBdev4", 00:19:18.136 "aliases": [ 00:19:18.136 "39d9c2c6-4149-4bf9-9af5-47367b614ad2" 00:19:18.136 ], 00:19:18.136 "product_name": "Malloc disk", 00:19:18.136 "block_size": 512, 00:19:18.136 "num_blocks": 65536, 00:19:18.136 "uuid": "39d9c2c6-4149-4bf9-9af5-47367b614ad2", 00:19:18.136 "assigned_rate_limits": { 00:19:18.136 "rw_ios_per_sec": 0, 00:19:18.136 "rw_mbytes_per_sec": 0, 00:19:18.136 "r_mbytes_per_sec": 0, 00:19:18.136 "w_mbytes_per_sec": 0 00:19:18.136 }, 00:19:18.136 "claimed": true, 00:19:18.136 "claim_type": "exclusive_write", 00:19:18.136 "zoned": false, 00:19:18.136 "supported_io_types": { 
00:19:18.136 "read": true, 00:19:18.136 "write": true, 00:19:18.136 "unmap": true, 00:19:18.136 "write_zeroes": true, 00:19:18.136 "flush": true, 00:19:18.136 "reset": true, 00:19:18.136 "compare": false, 00:19:18.136 "compare_and_write": false, 00:19:18.136 "abort": true, 00:19:18.136 "nvme_admin": false, 00:19:18.136 "nvme_io": false 00:19:18.136 }, 00:19:18.136 "memory_domains": [ 00:19:18.136 { 00:19:18.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:18.136 "dma_device_type": 2 00:19:18.136 } 00:19:18.136 ], 00:19:18.136 "driver_specific": {} 00:19:18.136 } 00:19:18.136 ] 00:19:18.136 05:16:37 -- common/autotest_common.sh@895 -- # return 0 00:19:18.136 05:16:37 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:18.136 05:16:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:18.136 05:16:37 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:19:18.136 05:16:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:18.136 05:16:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:18.136 05:16:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:18.136 05:16:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:18.136 05:16:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:18.136 05:16:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:18.136 05:16:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:18.136 05:16:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:18.136 05:16:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:18.136 05:16:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.136 05:16:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:18.396 05:16:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:18.396 "name": "Existed_Raid", 00:19:18.396 "uuid": "22cf7964-e1d5-475c-b083-bbd094df0f0d", 00:19:18.396 "strip_size_kb": 0, 00:19:18.396 "state": "online", 00:19:18.396 "raid_level": "raid1", 00:19:18.396 "superblock": false, 00:19:18.396 "num_base_bdevs": 4, 00:19:18.396 "num_base_bdevs_discovered": 4, 00:19:18.396 "num_base_bdevs_operational": 4, 00:19:18.396 "base_bdevs_list": [ 00:19:18.396 { 00:19:18.396 "name": "BaseBdev1", 00:19:18.396 "uuid": "cb2ea1b0-95c5-410d-97b7-ad7dbdc93f3a", 00:19:18.396 "is_configured": true, 00:19:18.396 "data_offset": 0, 00:19:18.396 "data_size": 65536 00:19:18.396 }, 00:19:18.396 { 00:19:18.396 "name": "BaseBdev2", 00:19:18.396 "uuid": "1e74eec6-31da-43e6-9329-39e6f206120a", 00:19:18.396 "is_configured": true, 00:19:18.396 "data_offset": 0, 00:19:18.396 "data_size": 65536 00:19:18.396 }, 00:19:18.396 { 00:19:18.396 "name": "BaseBdev3", 00:19:18.396 "uuid": "fbad3b68-ac6b-4307-aa3c-a821260070f4", 00:19:18.396 "is_configured": true, 00:19:18.396 "data_offset": 0, 00:19:18.396 "data_size": 65536 00:19:18.396 }, 00:19:18.396 { 00:19:18.396 "name": "BaseBdev4", 00:19:18.396 "uuid": "39d9c2c6-4149-4bf9-9af5-47367b614ad2", 00:19:18.396 "is_configured": true, 00:19:18.396 "data_offset": 0, 00:19:18.396 "data_size": 65536 00:19:18.396 } 00:19:18.396 ] 00:19:18.396 }' 00:19:18.396 05:16:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:18.396 05:16:37 -- common/autotest_common.sh@10 -- # set +x 00:19:18.655 05:16:37 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:18.914 [2024-07-26 05:16:37.830491] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:18.914 05:16:37 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:18.914 05:16:37 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:19:18.914 05:16:37 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:18.914 05:16:37 -- bdev/bdev_raid.sh@196 -- # return 0 00:19:18.914 05:16:37 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:19:18.914 05:16:37 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:18.914 05:16:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:18.914 05:16:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:18.914 05:16:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:18.914 05:16:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:18.914 05:16:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:18.914 05:16:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:18.914 05:16:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:18.914 05:16:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:18.914 05:16:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:18.914 05:16:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.914 05:16:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:19.173 05:16:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:19.173 "name": "Existed_Raid", 00:19:19.173 "uuid": "22cf7964-e1d5-475c-b083-bbd094df0f0d", 00:19:19.173 "strip_size_kb": 0, 00:19:19.173 "state": "online", 00:19:19.173 "raid_level": "raid1", 00:19:19.173 "superblock": false, 00:19:19.173 "num_base_bdevs": 4, 00:19:19.173 "num_base_bdevs_discovered": 3, 00:19:19.173 "num_base_bdevs_operational": 3, 00:19:19.173 "base_bdevs_list": [ 00:19:19.173 { 00:19:19.173 "name": null, 00:19:19.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.173 "is_configured": false, 00:19:19.173 "data_offset": 0, 00:19:19.173 "data_size": 65536 00:19:19.173 }, 00:19:19.173 { 00:19:19.173 "name": "BaseBdev2", 00:19:19.173 "uuid": "1e74eec6-31da-43e6-9329-39e6f206120a", 00:19:19.173 "is_configured": true, 00:19:19.173 "data_offset": 0, 00:19:19.173 "data_size": 65536 00:19:19.173 }, 00:19:19.173 { 00:19:19.173 "name": "BaseBdev3", 00:19:19.173 "uuid": "fbad3b68-ac6b-4307-aa3c-a821260070f4", 00:19:19.173 "is_configured": true, 00:19:19.173 "data_offset": 0, 00:19:19.173 "data_size": 65536 00:19:19.173 }, 00:19:19.173 { 00:19:19.173 "name": "BaseBdev4", 00:19:19.173 "uuid": "39d9c2c6-4149-4bf9-9af5-47367b614ad2", 00:19:19.173 "is_configured": true, 00:19:19.173 "data_offset": 0, 00:19:19.173 "data_size": 65536 00:19:19.173 } 00:19:19.173 ] 00:19:19.173 }' 00:19:19.173 05:16:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:19.173 05:16:38 -- common/autotest_common.sh@10 -- # set +x 00:19:19.431 05:16:38 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:19.431 05:16:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:19.431 05:16:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:19.431 05:16:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.689 05:16:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:19.689 05:16:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:19.689 05:16:38 -- bdev/bdev_raid.sh@279 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:19.948 [2024-07-26 05:16:38.946324] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:19.948 05:16:39 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:19.948 05:16:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:19.948 05:16:39 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.948 05:16:39 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:20.206 05:16:39 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:20.206 05:16:39 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:20.206 05:16:39 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:20.465 [2024-07-26 05:16:39.516793] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:20.724 05:16:39 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:20.724 05:16:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:20.724 05:16:39 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:20.724 05:16:39 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.724 05:16:39 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:20.724 05:16:39 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:20.724 05:16:39 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:20.983 [2024-07-26 05:16:40.021462] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:20.983 [2024-07-26 05:16:40.021521] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:20.983 [2024-07-26 05:16:40.021612] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:20.983 [2024-07-26 05:16:40.089817] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:20.983 [2024-07-26 05:16:40.089875] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:19:21.242 05:16:40 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:21.242 05:16:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:21.242 05:16:40 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.242 05:16:40 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:21.501 05:16:40 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:21.501 05:16:40 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:21.501 05:16:40 -- bdev/bdev_raid.sh@287 -- # killprocess 76724 00:19:21.501 05:16:40 -- common/autotest_common.sh@926 -- # '[' -z 76724 ']' 00:19:21.501 05:16:40 -- common/autotest_common.sh@930 -- # kill -0 76724 00:19:21.501 05:16:40 -- common/autotest_common.sh@931 -- # uname 00:19:21.501 05:16:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:21.501 05:16:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76724 00:19:21.501 killing process with pid 76724 00:19:21.501 05:16:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:21.501 05:16:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:21.501 05:16:40 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 76724' 00:19:21.501 05:16:40 -- common/autotest_common.sh@945 -- # kill 76724 00:19:21.501 05:16:40 -- common/autotest_common.sh@950 -- # wait 76724 00:19:21.501 [2024-07-26 05:16:40.428678] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:21.501 [2024-07-26 05:16:40.428774] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:22.437 ************************************ 00:19:22.437 END TEST raid_state_function_test 00:19:22.437 ************************************ 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:22.437 00:19:22.437 real 0m11.694s 00:19:22.437 user 0m19.669s 00:19:22.437 sys 0m1.718s 00:19:22.437 05:16:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:22.437 05:16:41 -- common/autotest_common.sh@10 -- # set +x 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:19:22.437 05:16:41 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:19:22.437 05:16:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:22.437 05:16:41 -- common/autotest_common.sh@10 -- # set +x 00:19:22.437 ************************************ 00:19:22.437 START TEST raid_state_function_test_sb 00:19:22.437 ************************************ 00:19:22.437 05:16:41 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 true 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:19:22.437 Process raid pid: 77118 
00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@226 -- # raid_pid=77118 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 77118' 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@228 -- # waitforlisten 77118 /var/tmp/spdk-raid.sock 00:19:22.437 05:16:41 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:22.437 05:16:41 -- common/autotest_common.sh@819 -- # '[' -z 77118 ']' 00:19:22.437 05:16:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:22.437 05:16:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:22.437 05:16:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:22.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:22.437 05:16:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:22.437 05:16:41 -- common/autotest_common.sh@10 -- # set +x 00:19:22.696 [2024-07-26 05:16:41.569250] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:22.696 [2024-07-26 05:16:41.569594] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.696 [2024-07-26 05:16:41.741063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.955 [2024-07-26 05:16:41.897310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.955 [2024-07-26 05:16:42.054159] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:23.523 05:16:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:23.523 05:16:42 -- common/autotest_common.sh@852 -- # return 0 00:19:23.523 05:16:42 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:23.782 [2024-07-26 05:16:42.680434] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:23.782 [2024-07-26 05:16:42.680506] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:23.782 [2024-07-26 05:16:42.680523] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:23.782 [2024-07-26 05:16:42.680536] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:23.782 [2024-07-26 05:16:42.680544] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:23.782 [2024-07-26 05:16:42.680555] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:23.782 [2024-07-26 05:16:42.680562] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:23.782 [2024-07-26 05:16:42.680573] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:23.782 05:16:42 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:23.782 05:16:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:23.782 05:16:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:23.782 05:16:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:23.782 
05:16:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:23.782 05:16:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:23.782 05:16:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:23.782 05:16:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:23.782 05:16:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:23.782 05:16:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:23.782 05:16:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.782 05:16:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:23.782 05:16:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:23.782 "name": "Existed_Raid", 00:19:23.782 "uuid": "d52d7ede-9eca-40bf-9fbd-db07e175a0f2", 00:19:23.782 "strip_size_kb": 0, 00:19:23.782 "state": "configuring", 00:19:23.782 "raid_level": "raid1", 00:19:23.782 "superblock": true, 00:19:23.782 "num_base_bdevs": 4, 00:19:23.782 "num_base_bdevs_discovered": 0, 00:19:23.782 "num_base_bdevs_operational": 4, 00:19:23.782 "base_bdevs_list": [ 00:19:23.782 { 00:19:23.782 "name": "BaseBdev1", 00:19:23.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.782 "is_configured": false, 00:19:23.782 "data_offset": 0, 00:19:23.782 "data_size": 0 00:19:23.782 }, 00:19:23.782 { 00:19:23.782 "name": "BaseBdev2", 00:19:23.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.782 "is_configured": false, 00:19:23.782 "data_offset": 0, 00:19:23.782 "data_size": 0 00:19:23.782 }, 00:19:23.782 { 00:19:23.782 "name": "BaseBdev3", 00:19:23.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.782 "is_configured": false, 00:19:23.782 "data_offset": 0, 00:19:23.782 "data_size": 0 00:19:23.782 }, 00:19:23.782 { 00:19:23.782 "name": "BaseBdev4", 00:19:23.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.782 "is_configured": false, 00:19:23.782 "data_offset": 0, 00:19:23.782 "data_size": 0 00:19:23.782 } 00:19:23.782 ] 00:19:23.782 }' 00:19:23.782 05:16:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:23.782 05:16:42 -- common/autotest_common.sh@10 -- # set +x 00:19:24.360 05:16:43 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:24.360 [2024-07-26 05:16:43.376488] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:24.360 [2024-07-26 05:16:43.376692] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:19:24.360 05:16:43 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:24.633 [2024-07-26 05:16:43.572588] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:24.633 [2024-07-26 05:16:43.572659] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:24.633 [2024-07-26 05:16:43.572673] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:24.633 [2024-07-26 05:16:43.572686] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:24.633 [2024-07-26 05:16:43.572694] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:24.633 [2024-07-26 05:16:43.572704] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:24.633 [2024-07-26 05:16:43.572711] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:24.633 [2024-07-26 05:16:43.572722] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:24.633 05:16:43 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:24.891 [2024-07-26 05:16:43.849584] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:24.891 BaseBdev1 00:19:24.891 05:16:43 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:24.891 05:16:43 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:24.891 05:16:43 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:24.891 05:16:43 -- common/autotest_common.sh@889 -- # local i 00:19:24.891 05:16:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:24.891 05:16:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:24.891 05:16:43 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:25.149 05:16:44 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:25.149 [ 00:19:25.149 { 00:19:25.149 "name": "BaseBdev1", 00:19:25.149 "aliases": [ 00:19:25.149 "510e73c8-2dc2-43e5-ac7a-171dd690fc28" 00:19:25.149 ], 00:19:25.149 "product_name": "Malloc disk", 00:19:25.149 "block_size": 512, 00:19:25.149 "num_blocks": 65536, 00:19:25.149 "uuid": "510e73c8-2dc2-43e5-ac7a-171dd690fc28", 00:19:25.149 "assigned_rate_limits": { 00:19:25.149 "rw_ios_per_sec": 0, 00:19:25.149 "rw_mbytes_per_sec": 0, 00:19:25.149 "r_mbytes_per_sec": 0, 00:19:25.150 "w_mbytes_per_sec": 0 00:19:25.150 }, 00:19:25.150 "claimed": true, 00:19:25.150 "claim_type": "exclusive_write", 00:19:25.150 "zoned": false, 00:19:25.150 "supported_io_types": { 00:19:25.150 "read": true, 00:19:25.150 "write": true, 00:19:25.150 "unmap": true, 00:19:25.150 "write_zeroes": true, 00:19:25.150 "flush": true, 00:19:25.150 "reset": true, 00:19:25.150 "compare": false, 00:19:25.150 "compare_and_write": false, 00:19:25.150 "abort": true, 00:19:25.150 "nvme_admin": false, 00:19:25.150 "nvme_io": false 00:19:25.150 }, 00:19:25.150 "memory_domains": [ 00:19:25.150 { 00:19:25.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.150 "dma_device_type": 2 00:19:25.150 } 00:19:25.150 ], 00:19:25.150 "driver_specific": {} 00:19:25.150 } 00:19:25.150 ] 00:19:25.150 05:16:44 -- common/autotest_common.sh@895 -- # return 0 00:19:25.150 05:16:44 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:25.150 05:16:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:25.150 05:16:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:25.150 05:16:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:25.150 05:16:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:25.150 05:16:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:25.150 05:16:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:25.150 05:16:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:25.150 05:16:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:25.150 05:16:44 -- bdev/bdev_raid.sh@125 -- # local tmp 
00:19:25.150 05:16:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.150 05:16:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:25.408 05:16:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:25.408 "name": "Existed_Raid", 00:19:25.408 "uuid": "18b28e04-8c41-44b2-91fb-2930367aa3de", 00:19:25.408 "strip_size_kb": 0, 00:19:25.408 "state": "configuring", 00:19:25.408 "raid_level": "raid1", 00:19:25.408 "superblock": true, 00:19:25.408 "num_base_bdevs": 4, 00:19:25.408 "num_base_bdevs_discovered": 1, 00:19:25.408 "num_base_bdevs_operational": 4, 00:19:25.408 "base_bdevs_list": [ 00:19:25.408 { 00:19:25.408 "name": "BaseBdev1", 00:19:25.408 "uuid": "510e73c8-2dc2-43e5-ac7a-171dd690fc28", 00:19:25.408 "is_configured": true, 00:19:25.408 "data_offset": 2048, 00:19:25.408 "data_size": 63488 00:19:25.408 }, 00:19:25.408 { 00:19:25.408 "name": "BaseBdev2", 00:19:25.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.408 "is_configured": false, 00:19:25.408 "data_offset": 0, 00:19:25.408 "data_size": 0 00:19:25.408 }, 00:19:25.408 { 00:19:25.408 "name": "BaseBdev3", 00:19:25.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.408 "is_configured": false, 00:19:25.408 "data_offset": 0, 00:19:25.408 "data_size": 0 00:19:25.408 }, 00:19:25.408 { 00:19:25.408 "name": "BaseBdev4", 00:19:25.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.408 "is_configured": false, 00:19:25.408 "data_offset": 0, 00:19:25.408 "data_size": 0 00:19:25.408 } 00:19:25.408 ] 00:19:25.408 }' 00:19:25.408 05:16:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:25.408 05:16:44 -- common/autotest_common.sh@10 -- # set +x 00:19:25.975 05:16:44 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:25.975 [2024-07-26 05:16:44.949916] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:25.975 [2024-07-26 05:16:44.950193] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:19:25.975 05:16:44 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:19:25.975 05:16:44 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:26.232 05:16:45 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:26.490 BaseBdev1 00:19:26.490 05:16:45 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:19:26.490 05:16:45 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:26.490 05:16:45 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:26.490 05:16:45 -- common/autotest_common.sh@889 -- # local i 00:19:26.490 05:16:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:26.490 05:16:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:26.490 05:16:45 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:26.748 05:16:45 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:26.748 [ 00:19:26.748 { 00:19:26.748 "name": "BaseBdev1", 00:19:26.748 "aliases": [ 00:19:26.748 "8da523ba-9e10-4d93-b8f0-61f0d16a38a8" 00:19:26.748 ], 00:19:26.748 
"product_name": "Malloc disk", 00:19:26.748 "block_size": 512, 00:19:26.748 "num_blocks": 65536, 00:19:26.748 "uuid": "8da523ba-9e10-4d93-b8f0-61f0d16a38a8", 00:19:26.748 "assigned_rate_limits": { 00:19:26.748 "rw_ios_per_sec": 0, 00:19:26.748 "rw_mbytes_per_sec": 0, 00:19:26.748 "r_mbytes_per_sec": 0, 00:19:26.748 "w_mbytes_per_sec": 0 00:19:26.748 }, 00:19:26.748 "claimed": false, 00:19:26.748 "zoned": false, 00:19:26.748 "supported_io_types": { 00:19:26.748 "read": true, 00:19:26.748 "write": true, 00:19:26.748 "unmap": true, 00:19:26.748 "write_zeroes": true, 00:19:26.748 "flush": true, 00:19:26.748 "reset": true, 00:19:26.748 "compare": false, 00:19:26.748 "compare_and_write": false, 00:19:26.748 "abort": true, 00:19:26.748 "nvme_admin": false, 00:19:26.748 "nvme_io": false 00:19:26.748 }, 00:19:26.748 "memory_domains": [ 00:19:26.748 { 00:19:26.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:26.748 "dma_device_type": 2 00:19:26.748 } 00:19:26.748 ], 00:19:26.748 "driver_specific": {} 00:19:26.748 } 00:19:26.748 ] 00:19:26.748 05:16:45 -- common/autotest_common.sh@895 -- # return 0 00:19:26.748 05:16:45 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:27.006 [2024-07-26 05:16:46.025944] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:27.006 [2024-07-26 05:16:46.028003] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:27.006 [2024-07-26 05:16:46.028078] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:27.006 [2024-07-26 05:16:46.028094] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:27.006 [2024-07-26 05:16:46.028108] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:27.006 [2024-07-26 05:16:46.028117] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:27.006 [2024-07-26 05:16:46.028130] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:27.006 05:16:46 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:27.006 05:16:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:27.006 05:16:46 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:27.006 05:16:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:27.006 05:16:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:27.006 05:16:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:27.006 05:16:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:27.006 05:16:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:27.006 05:16:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:27.006 05:16:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:27.006 05:16:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:27.006 05:16:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:27.006 05:16:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:27.006 05:16:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:27.264 05:16:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:27.264 "name": "Existed_Raid", 00:19:27.264 "uuid": 
"639d0641-d67c-45be-a46d-94152270f9d2", 00:19:27.264 "strip_size_kb": 0, 00:19:27.264 "state": "configuring", 00:19:27.264 "raid_level": "raid1", 00:19:27.264 "superblock": true, 00:19:27.264 "num_base_bdevs": 4, 00:19:27.264 "num_base_bdevs_discovered": 1, 00:19:27.264 "num_base_bdevs_operational": 4, 00:19:27.264 "base_bdevs_list": [ 00:19:27.264 { 00:19:27.264 "name": "BaseBdev1", 00:19:27.264 "uuid": "8da523ba-9e10-4d93-b8f0-61f0d16a38a8", 00:19:27.264 "is_configured": true, 00:19:27.264 "data_offset": 2048, 00:19:27.264 "data_size": 63488 00:19:27.264 }, 00:19:27.264 { 00:19:27.264 "name": "BaseBdev2", 00:19:27.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.264 "is_configured": false, 00:19:27.264 "data_offset": 0, 00:19:27.264 "data_size": 0 00:19:27.264 }, 00:19:27.264 { 00:19:27.264 "name": "BaseBdev3", 00:19:27.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.264 "is_configured": false, 00:19:27.264 "data_offset": 0, 00:19:27.264 "data_size": 0 00:19:27.264 }, 00:19:27.264 { 00:19:27.264 "name": "BaseBdev4", 00:19:27.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.264 "is_configured": false, 00:19:27.264 "data_offset": 0, 00:19:27.264 "data_size": 0 00:19:27.264 } 00:19:27.264 ] 00:19:27.264 }' 00:19:27.264 05:16:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:27.264 05:16:46 -- common/autotest_common.sh@10 -- # set +x 00:19:27.521 05:16:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:27.778 [2024-07-26 05:16:46.805820] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:27.778 BaseBdev2 00:19:27.779 05:16:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:27.779 05:16:46 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:19:27.779 05:16:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:27.779 05:16:46 -- common/autotest_common.sh@889 -- # local i 00:19:27.779 05:16:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:27.779 05:16:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:27.779 05:16:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:28.036 05:16:47 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:28.293 [ 00:19:28.293 { 00:19:28.293 "name": "BaseBdev2", 00:19:28.293 "aliases": [ 00:19:28.293 "25b0fa1e-0ac1-4107-9c7e-05747eb5a67a" 00:19:28.293 ], 00:19:28.293 "product_name": "Malloc disk", 00:19:28.293 "block_size": 512, 00:19:28.293 "num_blocks": 65536, 00:19:28.293 "uuid": "25b0fa1e-0ac1-4107-9c7e-05747eb5a67a", 00:19:28.293 "assigned_rate_limits": { 00:19:28.293 "rw_ios_per_sec": 0, 00:19:28.294 "rw_mbytes_per_sec": 0, 00:19:28.294 "r_mbytes_per_sec": 0, 00:19:28.294 "w_mbytes_per_sec": 0 00:19:28.294 }, 00:19:28.294 "claimed": true, 00:19:28.294 "claim_type": "exclusive_write", 00:19:28.294 "zoned": false, 00:19:28.294 "supported_io_types": { 00:19:28.294 "read": true, 00:19:28.294 "write": true, 00:19:28.294 "unmap": true, 00:19:28.294 "write_zeroes": true, 00:19:28.294 "flush": true, 00:19:28.294 "reset": true, 00:19:28.294 "compare": false, 00:19:28.294 "compare_and_write": false, 00:19:28.294 "abort": true, 00:19:28.294 "nvme_admin": false, 00:19:28.294 "nvme_io": false 00:19:28.294 }, 00:19:28.294 "memory_domains": [ 00:19:28.294 { 
00:19:28.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.294 "dma_device_type": 2 00:19:28.294 } 00:19:28.294 ], 00:19:28.294 "driver_specific": {} 00:19:28.294 } 00:19:28.294 ] 00:19:28.294 05:16:47 -- common/autotest_common.sh@895 -- # return 0 00:19:28.294 05:16:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:28.294 05:16:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:28.294 05:16:47 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:28.294 05:16:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:28.294 05:16:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:28.294 05:16:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:28.294 05:16:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:28.294 05:16:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:28.294 05:16:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:28.294 05:16:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:28.294 05:16:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:28.294 05:16:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:28.294 05:16:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.294 05:16:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:28.552 05:16:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:28.552 "name": "Existed_Raid", 00:19:28.552 "uuid": "639d0641-d67c-45be-a46d-94152270f9d2", 00:19:28.552 "strip_size_kb": 0, 00:19:28.552 "state": "configuring", 00:19:28.552 "raid_level": "raid1", 00:19:28.552 "superblock": true, 00:19:28.552 "num_base_bdevs": 4, 00:19:28.552 "num_base_bdevs_discovered": 2, 00:19:28.552 "num_base_bdevs_operational": 4, 00:19:28.552 "base_bdevs_list": [ 00:19:28.552 { 00:19:28.552 "name": "BaseBdev1", 00:19:28.552 "uuid": "8da523ba-9e10-4d93-b8f0-61f0d16a38a8", 00:19:28.552 "is_configured": true, 00:19:28.552 "data_offset": 2048, 00:19:28.552 "data_size": 63488 00:19:28.552 }, 00:19:28.552 { 00:19:28.552 "name": "BaseBdev2", 00:19:28.552 "uuid": "25b0fa1e-0ac1-4107-9c7e-05747eb5a67a", 00:19:28.552 "is_configured": true, 00:19:28.552 "data_offset": 2048, 00:19:28.552 "data_size": 63488 00:19:28.552 }, 00:19:28.552 { 00:19:28.552 "name": "BaseBdev3", 00:19:28.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.552 "is_configured": false, 00:19:28.552 "data_offset": 0, 00:19:28.552 "data_size": 0 00:19:28.552 }, 00:19:28.552 { 00:19:28.552 "name": "BaseBdev4", 00:19:28.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.552 "is_configured": false, 00:19:28.552 "data_offset": 0, 00:19:28.552 "data_size": 0 00:19:28.552 } 00:19:28.552 ] 00:19:28.552 }' 00:19:28.552 05:16:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:28.552 05:16:47 -- common/autotest_common.sh@10 -- # set +x 00:19:28.811 05:16:47 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:29.069 [2024-07-26 05:16:47.941799] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:29.069 BaseBdev3 00:19:29.069 05:16:47 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:29.069 05:16:47 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:19:29.069 05:16:47 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:29.069 05:16:47 -- 
common/autotest_common.sh@889 -- # local i 00:19:29.069 05:16:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:29.069 05:16:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:29.069 05:16:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:29.069 05:16:48 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:29.328 [ 00:19:29.328 { 00:19:29.328 "name": "BaseBdev3", 00:19:29.328 "aliases": [ 00:19:29.328 "285e4950-ce4a-48e9-885d-ca4e0033b957" 00:19:29.328 ], 00:19:29.328 "product_name": "Malloc disk", 00:19:29.328 "block_size": 512, 00:19:29.328 "num_blocks": 65536, 00:19:29.328 "uuid": "285e4950-ce4a-48e9-885d-ca4e0033b957", 00:19:29.328 "assigned_rate_limits": { 00:19:29.328 "rw_ios_per_sec": 0, 00:19:29.328 "rw_mbytes_per_sec": 0, 00:19:29.328 "r_mbytes_per_sec": 0, 00:19:29.328 "w_mbytes_per_sec": 0 00:19:29.328 }, 00:19:29.328 "claimed": true, 00:19:29.328 "claim_type": "exclusive_write", 00:19:29.328 "zoned": false, 00:19:29.328 "supported_io_types": { 00:19:29.328 "read": true, 00:19:29.328 "write": true, 00:19:29.328 "unmap": true, 00:19:29.328 "write_zeroes": true, 00:19:29.328 "flush": true, 00:19:29.328 "reset": true, 00:19:29.328 "compare": false, 00:19:29.328 "compare_and_write": false, 00:19:29.328 "abort": true, 00:19:29.328 "nvme_admin": false, 00:19:29.328 "nvme_io": false 00:19:29.328 }, 00:19:29.328 "memory_domains": [ 00:19:29.328 { 00:19:29.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:29.328 "dma_device_type": 2 00:19:29.328 } 00:19:29.328 ], 00:19:29.328 "driver_specific": {} 00:19:29.328 } 00:19:29.328 ] 00:19:29.328 05:16:48 -- common/autotest_common.sh@895 -- # return 0 00:19:29.328 05:16:48 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:29.328 05:16:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:29.328 05:16:48 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:29.328 05:16:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:29.328 05:16:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:29.328 05:16:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:29.328 05:16:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:29.328 05:16:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:29.328 05:16:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:29.328 05:16:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:29.328 05:16:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:29.328 05:16:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:29.328 05:16:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.328 05:16:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.587 05:16:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:29.587 "name": "Existed_Raid", 00:19:29.587 "uuid": "639d0641-d67c-45be-a46d-94152270f9d2", 00:19:29.587 "strip_size_kb": 0, 00:19:29.587 "state": "configuring", 00:19:29.587 "raid_level": "raid1", 00:19:29.587 "superblock": true, 00:19:29.587 "num_base_bdevs": 4, 00:19:29.587 "num_base_bdevs_discovered": 3, 00:19:29.587 "num_base_bdevs_operational": 4, 00:19:29.587 "base_bdevs_list": [ 00:19:29.587 { 00:19:29.587 "name": "BaseBdev1", 00:19:29.587 
"uuid": "8da523ba-9e10-4d93-b8f0-61f0d16a38a8", 00:19:29.587 "is_configured": true, 00:19:29.587 "data_offset": 2048, 00:19:29.587 "data_size": 63488 00:19:29.587 }, 00:19:29.587 { 00:19:29.587 "name": "BaseBdev2", 00:19:29.587 "uuid": "25b0fa1e-0ac1-4107-9c7e-05747eb5a67a", 00:19:29.588 "is_configured": true, 00:19:29.588 "data_offset": 2048, 00:19:29.588 "data_size": 63488 00:19:29.588 }, 00:19:29.588 { 00:19:29.588 "name": "BaseBdev3", 00:19:29.588 "uuid": "285e4950-ce4a-48e9-885d-ca4e0033b957", 00:19:29.588 "is_configured": true, 00:19:29.588 "data_offset": 2048, 00:19:29.588 "data_size": 63488 00:19:29.588 }, 00:19:29.588 { 00:19:29.588 "name": "BaseBdev4", 00:19:29.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.588 "is_configured": false, 00:19:29.588 "data_offset": 0, 00:19:29.588 "data_size": 0 00:19:29.588 } 00:19:29.588 ] 00:19:29.588 }' 00:19:29.588 05:16:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:29.588 05:16:48 -- common/autotest_common.sh@10 -- # set +x 00:19:29.846 05:16:48 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:30.106 [2024-07-26 05:16:49.082607] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:30.106 [2024-07-26 05:16:49.083135] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:19:30.106 [2024-07-26 05:16:49.083275] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:30.106 [2024-07-26 05:16:49.083422] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:19:30.106 [2024-07-26 05:16:49.083934] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:19:30.106 BaseBdev4 00:19:30.106 [2024-07-26 05:16:49.084149] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:19:30.106 [2024-07-26 05:16:49.084551] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:30.106 05:16:49 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:30.106 05:16:49 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:19:30.106 05:16:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:30.106 05:16:49 -- common/autotest_common.sh@889 -- # local i 00:19:30.106 05:16:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:30.106 05:16:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:30.106 05:16:49 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:30.365 05:16:49 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:30.623 [ 00:19:30.623 { 00:19:30.623 "name": "BaseBdev4", 00:19:30.623 "aliases": [ 00:19:30.623 "3f47dda0-f0ae-4037-9161-a4b43fa90318" 00:19:30.623 ], 00:19:30.623 "product_name": "Malloc disk", 00:19:30.623 "block_size": 512, 00:19:30.623 "num_blocks": 65536, 00:19:30.623 "uuid": "3f47dda0-f0ae-4037-9161-a4b43fa90318", 00:19:30.623 "assigned_rate_limits": { 00:19:30.623 "rw_ios_per_sec": 0, 00:19:30.623 "rw_mbytes_per_sec": 0, 00:19:30.623 "r_mbytes_per_sec": 0, 00:19:30.623 "w_mbytes_per_sec": 0 00:19:30.623 }, 00:19:30.623 "claimed": true, 00:19:30.623 "claim_type": "exclusive_write", 00:19:30.623 "zoned": false, 00:19:30.623 "supported_io_types": { 00:19:30.623 
"read": true, 00:19:30.623 "write": true, 00:19:30.623 "unmap": true, 00:19:30.623 "write_zeroes": true, 00:19:30.623 "flush": true, 00:19:30.623 "reset": true, 00:19:30.623 "compare": false, 00:19:30.623 "compare_and_write": false, 00:19:30.623 "abort": true, 00:19:30.623 "nvme_admin": false, 00:19:30.623 "nvme_io": false 00:19:30.623 }, 00:19:30.623 "memory_domains": [ 00:19:30.623 { 00:19:30.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.623 "dma_device_type": 2 00:19:30.623 } 00:19:30.623 ], 00:19:30.623 "driver_specific": {} 00:19:30.623 } 00:19:30.623 ] 00:19:30.623 05:16:49 -- common/autotest_common.sh@895 -- # return 0 00:19:30.623 05:16:49 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:30.623 05:16:49 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:30.623 05:16:49 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:19:30.623 05:16:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:30.623 05:16:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:30.623 05:16:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:30.623 05:16:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:30.623 05:16:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:30.623 05:16:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:30.623 05:16:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:30.623 05:16:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:30.623 05:16:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:30.623 05:16:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.623 05:16:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:30.623 05:16:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:30.623 "name": "Existed_Raid", 00:19:30.623 "uuid": "639d0641-d67c-45be-a46d-94152270f9d2", 00:19:30.623 "strip_size_kb": 0, 00:19:30.623 "state": "online", 00:19:30.623 "raid_level": "raid1", 00:19:30.623 "superblock": true, 00:19:30.623 "num_base_bdevs": 4, 00:19:30.623 "num_base_bdevs_discovered": 4, 00:19:30.623 "num_base_bdevs_operational": 4, 00:19:30.623 "base_bdevs_list": [ 00:19:30.623 { 00:19:30.623 "name": "BaseBdev1", 00:19:30.623 "uuid": "8da523ba-9e10-4d93-b8f0-61f0d16a38a8", 00:19:30.623 "is_configured": true, 00:19:30.623 "data_offset": 2048, 00:19:30.623 "data_size": 63488 00:19:30.623 }, 00:19:30.623 { 00:19:30.623 "name": "BaseBdev2", 00:19:30.623 "uuid": "25b0fa1e-0ac1-4107-9c7e-05747eb5a67a", 00:19:30.623 "is_configured": true, 00:19:30.623 "data_offset": 2048, 00:19:30.623 "data_size": 63488 00:19:30.623 }, 00:19:30.623 { 00:19:30.623 "name": "BaseBdev3", 00:19:30.623 "uuid": "285e4950-ce4a-48e9-885d-ca4e0033b957", 00:19:30.623 "is_configured": true, 00:19:30.623 "data_offset": 2048, 00:19:30.623 "data_size": 63488 00:19:30.623 }, 00:19:30.623 { 00:19:30.623 "name": "BaseBdev4", 00:19:30.623 "uuid": "3f47dda0-f0ae-4037-9161-a4b43fa90318", 00:19:30.623 "is_configured": true, 00:19:30.623 "data_offset": 2048, 00:19:30.623 "data_size": 63488 00:19:30.623 } 00:19:30.623 ] 00:19:30.623 }' 00:19:30.623 05:16:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:30.623 05:16:49 -- common/autotest_common.sh@10 -- # set +x 00:19:30.882 05:16:49 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:31.141 [2024-07-26 05:16:50.151162] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:31.141 05:16:50 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:31.141 05:16:50 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:19:31.141 05:16:50 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:31.141 05:16:50 -- bdev/bdev_raid.sh@196 -- # return 0 00:19:31.141 05:16:50 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:19:31.141 05:16:50 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:31.141 05:16:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:31.141 05:16:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:31.141 05:16:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:31.141 05:16:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:31.141 05:16:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:31.141 05:16:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:31.141 05:16:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:31.141 05:16:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:31.141 05:16:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:31.141 05:16:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.141 05:16:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.399 05:16:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:31.399 "name": "Existed_Raid", 00:19:31.399 "uuid": "639d0641-d67c-45be-a46d-94152270f9d2", 00:19:31.399 "strip_size_kb": 0, 00:19:31.399 "state": "online", 00:19:31.399 "raid_level": "raid1", 00:19:31.399 "superblock": true, 00:19:31.399 "num_base_bdevs": 4, 00:19:31.399 "num_base_bdevs_discovered": 3, 00:19:31.399 "num_base_bdevs_operational": 3, 00:19:31.399 "base_bdevs_list": [ 00:19:31.399 { 00:19:31.399 "name": null, 00:19:31.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.399 "is_configured": false, 00:19:31.399 "data_offset": 2048, 00:19:31.399 "data_size": 63488 00:19:31.399 }, 00:19:31.399 { 00:19:31.399 "name": "BaseBdev2", 00:19:31.399 "uuid": "25b0fa1e-0ac1-4107-9c7e-05747eb5a67a", 00:19:31.399 "is_configured": true, 00:19:31.399 "data_offset": 2048, 00:19:31.399 "data_size": 63488 00:19:31.399 }, 00:19:31.399 { 00:19:31.399 "name": "BaseBdev3", 00:19:31.399 "uuid": "285e4950-ce4a-48e9-885d-ca4e0033b957", 00:19:31.399 "is_configured": true, 00:19:31.399 "data_offset": 2048, 00:19:31.399 "data_size": 63488 00:19:31.399 }, 00:19:31.399 { 00:19:31.399 "name": "BaseBdev4", 00:19:31.399 "uuid": "3f47dda0-f0ae-4037-9161-a4b43fa90318", 00:19:31.399 "is_configured": true, 00:19:31.399 "data_offset": 2048, 00:19:31.399 "data_size": 63488 00:19:31.399 } 00:19:31.399 ] 00:19:31.399 }' 00:19:31.399 05:16:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:31.399 05:16:50 -- common/autotest_common.sh@10 -- # set +x 00:19:31.966 05:16:50 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:31.966 05:16:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:31.966 05:16:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:31.966 05:16:50 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.966 05:16:51 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:31.966 05:16:51 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:31.966 05:16:51 -- bdev/bdev_raid.sh@279 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:32.225 [2024-07-26 05:16:51.253835] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:32.483 05:16:51 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:32.483 05:16:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:32.483 05:16:51 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.483 05:16:51 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:32.742 05:16:51 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:32.742 05:16:51 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:32.742 05:16:51 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:32.742 [2024-07-26 05:16:51.771042] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:33.000 05:16:51 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:33.000 05:16:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:33.000 05:16:51 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:33.000 05:16:51 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.000 05:16:52 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:33.000 05:16:52 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:33.000 05:16:52 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:33.259 [2024-07-26 05:16:52.247656] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:33.259 [2024-07-26 05:16:52.247718] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:33.259 [2024-07-26 05:16:52.247781] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:33.259 [2024-07-26 05:16:52.314633] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:33.259 [2024-07-26 05:16:52.314897] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:19:33.259 05:16:52 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:33.259 05:16:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:33.259 05:16:52 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.259 05:16:52 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:33.517 05:16:52 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:33.517 05:16:52 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:33.517 05:16:52 -- bdev/bdev_raid.sh@287 -- # killprocess 77118 00:19:33.517 05:16:52 -- common/autotest_common.sh@926 -- # '[' -z 77118 ']' 00:19:33.517 05:16:52 -- common/autotest_common.sh@930 -- # kill -0 77118 00:19:33.517 05:16:52 -- common/autotest_common.sh@931 -- # uname 00:19:33.517 05:16:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:33.517 05:16:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77118 00:19:33.517 05:16:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:33.517 killing process with pid 77118 00:19:33.517 05:16:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:33.517 05:16:52 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 77118' 00:19:33.517 05:16:52 -- common/autotest_common.sh@945 -- # kill 77118 00:19:33.517 [2024-07-26 05:16:52.618334] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:33.517 05:16:52 -- common/autotest_common.sh@950 -- # wait 77118 00:19:33.517 [2024-07-26 05:16:52.618444] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:34.891 05:16:53 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:34.891 00:19:34.891 real 0m12.091s 00:19:34.891 user 0m20.332s 00:19:34.891 sys 0m1.768s 00:19:34.891 05:16:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:34.891 ************************************ 00:19:34.891 END TEST raid_state_function_test_sb 00:19:34.891 ************************************ 00:19:34.891 05:16:53 -- common/autotest_common.sh@10 -- # set +x 00:19:34.891 05:16:53 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:19:34.891 05:16:53 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:19:34.891 05:16:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:34.891 05:16:53 -- common/autotest_common.sh@10 -- # set +x 00:19:34.891 ************************************ 00:19:34.891 START TEST raid_superblock_test 00:19:34.891 ************************************ 00:19:34.891 05:16:53 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 4 00:19:34.891 05:16:53 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:19:34.891 05:16:53 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:19:34.891 05:16:53 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:19:34.891 05:16:53 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:19:34.891 05:16:53 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:19:34.891 05:16:53 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:19:34.891 05:16:53 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:19:34.891 05:16:53 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:19:34.891 05:16:53 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:19:34.891 05:16:53 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:19:34.891 05:16:53 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:19:34.891 05:16:53 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:19:34.891 05:16:53 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:19:34.891 05:16:53 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:19:34.891 05:16:53 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:19:34.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:34.891 05:16:53 -- bdev/bdev_raid.sh@357 -- # raid_pid=77515 00:19:34.891 05:16:53 -- bdev/bdev_raid.sh@358 -- # waitforlisten 77515 /var/tmp/spdk-raid.sock 00:19:34.891 05:16:53 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:34.891 05:16:53 -- common/autotest_common.sh@819 -- # '[' -z 77515 ']' 00:19:34.891 05:16:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:34.891 05:16:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:34.891 05:16:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:19:34.891 05:16:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:34.891 05:16:53 -- common/autotest_common.sh@10 -- # set +x 00:19:34.891 [2024-07-26 05:16:53.695160] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:34.891 [2024-07-26 05:16:53.695296] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77515 ] 00:19:34.891 [2024-07-26 05:16:53.847743] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.150 [2024-07-26 05:16:54.012373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.150 [2024-07-26 05:16:54.163320] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:35.717 05:16:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:35.717 05:16:54 -- common/autotest_common.sh@852 -- # return 0 00:19:35.717 05:16:54 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:19:35.717 05:16:54 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:35.717 05:16:54 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:19:35.717 05:16:54 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:19:35.717 05:16:54 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:35.717 05:16:54 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:35.717 05:16:54 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:35.717 05:16:54 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:35.717 05:16:54 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:35.975 malloc1 00:19:35.975 05:16:54 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:35.975 [2024-07-26 05:16:55.076939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:35.975 [2024-07-26 05:16:55.077193] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.975 [2024-07-26 05:16:55.077362] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:19:35.975 [2024-07-26 05:16:55.077467] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.975 [2024-07-26 05:16:55.079681] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.975 [2024-07-26 05:16:55.079838] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:35.975 pt1 00:19:36.233 05:16:55 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:36.233 05:16:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:36.233 05:16:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:19:36.233 05:16:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:19:36.233 05:16:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:36.233 05:16:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:36.233 05:16:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:36.233 05:16:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:36.233 05:16:55 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:36.233 malloc2 00:19:36.233 05:16:55 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:36.492 [2024-07-26 05:16:55.575354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:36.492 [2024-07-26 05:16:55.575614] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.492 [2024-07-26 05:16:55.575751] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:19:36.492 [2024-07-26 05:16:55.575861] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.492 [2024-07-26 05:16:55.578057] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.492 [2024-07-26 05:16:55.578095] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:36.492 pt2 00:19:36.492 05:16:55 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:36.492 05:16:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:36.492 05:16:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:19:36.492 05:16:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:19:36.492 05:16:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:36.492 05:16:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:36.492 05:16:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:36.492 05:16:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:36.492 05:16:55 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:36.751 malloc3 00:19:36.751 05:16:55 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:37.009 [2024-07-26 05:16:55.988283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:37.009 [2024-07-26 05:16:55.988367] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.009 [2024-07-26 05:16:55.988399] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:19:37.009 [2024-07-26 05:16:55.988428] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.009 [2024-07-26 05:16:55.990552] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.009 [2024-07-26 05:16:55.990815] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:37.009 pt3 00:19:37.009 05:16:56 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:37.009 05:16:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:37.009 05:16:56 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:19:37.009 05:16:56 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:19:37.009 05:16:56 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:37.009 05:16:56 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:37.009 05:16:56 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:37.009 05:16:56 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:37.009 05:16:56 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:19:37.267 malloc4 00:19:37.267 05:16:56 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:37.526 [2024-07-26 05:16:56.467049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:37.526 [2024-07-26 05:16:56.467365] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.526 [2024-07-26 05:16:56.467569] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:19:37.526 [2024-07-26 05:16:56.467718] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.526 [2024-07-26 05:16:56.470621] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.526 [2024-07-26 05:16:56.470809] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:37.526 pt4 00:19:37.526 05:16:56 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:37.526 05:16:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:37.526 05:16:56 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:19:37.785 [2024-07-26 05:16:56.703401] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:37.785 [2024-07-26 05:16:56.705814] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:37.785 [2024-07-26 05:16:56.705996] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:37.785 [2024-07-26 05:16:56.706114] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:37.785 [2024-07-26 05:16:56.706455] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:19:37.785 [2024-07-26 05:16:56.706480] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:37.785 [2024-07-26 05:16:56.706628] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:19:37.785 [2024-07-26 05:16:56.707198] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:19:37.785 [2024-07-26 05:16:56.707219] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009380 00:19:37.785 [2024-07-26 05:16:56.707482] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:37.785 05:16:56 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:37.785 05:16:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:37.785 05:16:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:37.785 05:16:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:37.785 05:16:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:37.785 05:16:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:37.785 05:16:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:37.785 05:16:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:37.785 05:16:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:37.785 05:16:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:37.785 05:16:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:19:37.785 05:16:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.046 05:16:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:38.046 "name": "raid_bdev1", 00:19:38.046 "uuid": "86675f86-900c-401c-8892-958cc63b46fa", 00:19:38.046 "strip_size_kb": 0, 00:19:38.046 "state": "online", 00:19:38.046 "raid_level": "raid1", 00:19:38.046 "superblock": true, 00:19:38.046 "num_base_bdevs": 4, 00:19:38.046 "num_base_bdevs_discovered": 4, 00:19:38.046 "num_base_bdevs_operational": 4, 00:19:38.046 "base_bdevs_list": [ 00:19:38.046 { 00:19:38.046 "name": "pt1", 00:19:38.046 "uuid": "12712af2-2d27-56b9-9bb9-090860364a3c", 00:19:38.046 "is_configured": true, 00:19:38.046 "data_offset": 2048, 00:19:38.046 "data_size": 63488 00:19:38.046 }, 00:19:38.046 { 00:19:38.046 "name": "pt2", 00:19:38.046 "uuid": "863aa765-b31c-568a-a7b3-2e0d646000dd", 00:19:38.046 "is_configured": true, 00:19:38.046 "data_offset": 2048, 00:19:38.046 "data_size": 63488 00:19:38.046 }, 00:19:38.046 { 00:19:38.046 "name": "pt3", 00:19:38.046 "uuid": "68ca2bc3-ce64-52ca-8c36-d48c6fe35287", 00:19:38.046 "is_configured": true, 00:19:38.046 "data_offset": 2048, 00:19:38.046 "data_size": 63488 00:19:38.046 }, 00:19:38.046 { 00:19:38.046 "name": "pt4", 00:19:38.046 "uuid": "4b794df1-b14b-5b71-83cb-447a73c6736d", 00:19:38.046 "is_configured": true, 00:19:38.046 "data_offset": 2048, 00:19:38.046 "data_size": 63488 00:19:38.046 } 00:19:38.046 ] 00:19:38.046 }' 00:19:38.046 05:16:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:38.046 05:16:56 -- common/autotest_common.sh@10 -- # set +x 00:19:38.329 05:16:57 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:38.329 05:16:57 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:19:38.596 [2024-07-26 05:16:57.451951] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:38.596 05:16:57 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=86675f86-900c-401c-8892-958cc63b46fa 00:19:38.596 05:16:57 -- bdev/bdev_raid.sh@380 -- # '[' -z 86675f86-900c-401c-8892-958cc63b46fa ']' 00:19:38.596 05:16:57 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:38.855 [2024-07-26 05:16:57.707745] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:38.855 [2024-07-26 05:16:57.707784] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:38.855 [2024-07-26 05:16:57.707862] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:38.855 [2024-07-26 05:16:57.707967] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:38.855 [2024-07-26 05:16:57.707981] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, state offline 00:19:38.855 05:16:57 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:19:38.855 05:16:57 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.855 05:16:57 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:19:38.855 05:16:57 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:19:38.855 05:16:57 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:38.855 05:16:57 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
00:19:39.113 05:16:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:39.113 05:16:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:39.372 05:16:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:39.372 05:16:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:39.631 05:16:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:39.631 05:16:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:39.631 05:16:58 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:39.631 05:16:58 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:40.198 05:16:59 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:19:40.199 05:16:59 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:40.199 05:16:59 -- common/autotest_common.sh@640 -- # local es=0 00:19:40.199 05:16:59 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:40.199 05:16:59 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:40.199 05:16:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:40.199 05:16:59 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:40.199 05:16:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:40.199 05:16:59 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:40.199 05:16:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:40.199 05:16:59 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:40.199 05:16:59 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:40.199 05:16:59 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:40.199 [2024-07-26 05:16:59.252371] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:40.199 [2024-07-26 05:16:59.254776] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:40.199 [2024-07-26 05:16:59.254859] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:40.199 [2024-07-26 05:16:59.254919] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:40.199 [2024-07-26 05:16:59.255029] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:19:40.199 [2024-07-26 05:16:59.255151] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:19:40.199 [2024-07-26 05:16:59.255204] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:19:40.199 [2024-07-26 05:16:59.255232] 
bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:19:40.199 [2024-07-26 05:16:59.255254] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:40.199 [2024-07-26 05:16:59.255267] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state configuring 00:19:40.199 request: 00:19:40.199 { 00:19:40.199 "name": "raid_bdev1", 00:19:40.199 "raid_level": "raid1", 00:19:40.199 "base_bdevs": [ 00:19:40.199 "malloc1", 00:19:40.199 "malloc2", 00:19:40.199 "malloc3", 00:19:40.199 "malloc4" 00:19:40.199 ], 00:19:40.199 "superblock": false, 00:19:40.199 "method": "bdev_raid_create", 00:19:40.199 "req_id": 1 00:19:40.199 } 00:19:40.199 Got JSON-RPC error response 00:19:40.199 response: 00:19:40.199 { 00:19:40.199 "code": -17, 00:19:40.199 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:40.199 } 00:19:40.199 05:16:59 -- common/autotest_common.sh@643 -- # es=1 00:19:40.199 05:16:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:40.199 05:16:59 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:40.199 05:16:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:40.199 05:16:59 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.199 05:16:59 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:19:40.458 05:16:59 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:19:40.458 05:16:59 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:19:40.458 05:16:59 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:40.716 [2024-07-26 05:16:59.708356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:40.716 [2024-07-26 05:16:59.708460] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.716 [2024-07-26 05:16:59.708492] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:19:40.716 [2024-07-26 05:16:59.708505] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.716 [2024-07-26 05:16:59.710959] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.716 [2024-07-26 05:16:59.711213] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:40.716 [2024-07-26 05:16:59.711330] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:40.716 [2024-07-26 05:16:59.711406] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:40.716 pt1 00:19:40.716 05:16:59 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:19:40.716 05:16:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:40.716 05:16:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:40.716 05:16:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:40.716 05:16:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:40.716 05:16:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:40.716 05:16:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:40.716 05:16:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:40.716 05:16:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:40.716 05:16:59 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:19:40.716 05:16:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.716 05:16:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.974 05:16:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:40.974 "name": "raid_bdev1", 00:19:40.974 "uuid": "86675f86-900c-401c-8892-958cc63b46fa", 00:19:40.974 "strip_size_kb": 0, 00:19:40.974 "state": "configuring", 00:19:40.974 "raid_level": "raid1", 00:19:40.974 "superblock": true, 00:19:40.974 "num_base_bdevs": 4, 00:19:40.974 "num_base_bdevs_discovered": 1, 00:19:40.974 "num_base_bdevs_operational": 4, 00:19:40.974 "base_bdevs_list": [ 00:19:40.974 { 00:19:40.974 "name": "pt1", 00:19:40.974 "uuid": "12712af2-2d27-56b9-9bb9-090860364a3c", 00:19:40.974 "is_configured": true, 00:19:40.974 "data_offset": 2048, 00:19:40.974 "data_size": 63488 00:19:40.974 }, 00:19:40.974 { 00:19:40.974 "name": null, 00:19:40.974 "uuid": "863aa765-b31c-568a-a7b3-2e0d646000dd", 00:19:40.974 "is_configured": false, 00:19:40.974 "data_offset": 2048, 00:19:40.974 "data_size": 63488 00:19:40.974 }, 00:19:40.974 { 00:19:40.974 "name": null, 00:19:40.974 "uuid": "68ca2bc3-ce64-52ca-8c36-d48c6fe35287", 00:19:40.974 "is_configured": false, 00:19:40.974 "data_offset": 2048, 00:19:40.974 "data_size": 63488 00:19:40.974 }, 00:19:40.974 { 00:19:40.974 "name": null, 00:19:40.974 "uuid": "4b794df1-b14b-5b71-83cb-447a73c6736d", 00:19:40.974 "is_configured": false, 00:19:40.974 "data_offset": 2048, 00:19:40.974 "data_size": 63488 00:19:40.974 } 00:19:40.974 ] 00:19:40.974 }' 00:19:40.974 05:16:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:40.974 05:16:59 -- common/autotest_common.sh@10 -- # set +x 00:19:41.231 05:17:00 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:19:41.231 05:17:00 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:41.490 [2024-07-26 05:17:00.404612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:41.490 [2024-07-26 05:17:00.404698] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:41.490 [2024-07-26 05:17:00.404732] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:19:41.490 [2024-07-26 05:17:00.404744] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:41.490 [2024-07-26 05:17:00.405214] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.490 [2024-07-26 05:17:00.405237] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:41.490 [2024-07-26 05:17:00.405328] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:41.490 [2024-07-26 05:17:00.405370] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:41.490 pt2 00:19:41.490 05:17:00 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:41.749 [2024-07-26 05:17:00.660684] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:41.749 05:17:00 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:19:41.749 05:17:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:41.749 05:17:00 -- bdev/bdev_raid.sh@118 -- # 
local expected_state=configuring 00:19:41.749 05:17:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:41.749 05:17:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:41.749 05:17:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:41.749 05:17:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:41.749 05:17:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:41.749 05:17:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:41.749 05:17:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:41.749 05:17:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.749 05:17:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.008 05:17:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:42.008 "name": "raid_bdev1", 00:19:42.008 "uuid": "86675f86-900c-401c-8892-958cc63b46fa", 00:19:42.008 "strip_size_kb": 0, 00:19:42.008 "state": "configuring", 00:19:42.008 "raid_level": "raid1", 00:19:42.008 "superblock": true, 00:19:42.008 "num_base_bdevs": 4, 00:19:42.008 "num_base_bdevs_discovered": 1, 00:19:42.008 "num_base_bdevs_operational": 4, 00:19:42.008 "base_bdevs_list": [ 00:19:42.008 { 00:19:42.008 "name": "pt1", 00:19:42.008 "uuid": "12712af2-2d27-56b9-9bb9-090860364a3c", 00:19:42.008 "is_configured": true, 00:19:42.008 "data_offset": 2048, 00:19:42.008 "data_size": 63488 00:19:42.008 }, 00:19:42.008 { 00:19:42.008 "name": null, 00:19:42.008 "uuid": "863aa765-b31c-568a-a7b3-2e0d646000dd", 00:19:42.008 "is_configured": false, 00:19:42.008 "data_offset": 2048, 00:19:42.008 "data_size": 63488 00:19:42.008 }, 00:19:42.008 { 00:19:42.008 "name": null, 00:19:42.008 "uuid": "68ca2bc3-ce64-52ca-8c36-d48c6fe35287", 00:19:42.008 "is_configured": false, 00:19:42.008 "data_offset": 2048, 00:19:42.008 "data_size": 63488 00:19:42.008 }, 00:19:42.008 { 00:19:42.008 "name": null, 00:19:42.008 "uuid": "4b794df1-b14b-5b71-83cb-447a73c6736d", 00:19:42.008 "is_configured": false, 00:19:42.008 "data_offset": 2048, 00:19:42.008 "data_size": 63488 00:19:42.008 } 00:19:42.008 ] 00:19:42.008 }' 00:19:42.008 05:17:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:42.008 05:17:00 -- common/autotest_common.sh@10 -- # set +x 00:19:42.268 05:17:01 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:19:42.268 05:17:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:42.268 05:17:01 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:42.268 [2024-07-26 05:17:01.352777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:42.268 [2024-07-26 05:17:01.352861] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.268 [2024-07-26 05:17:01.352887] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:19:42.268 [2024-07-26 05:17:01.352900] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.268 [2024-07-26 05:17:01.353398] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.268 [2024-07-26 05:17:01.353426] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:42.268 [2024-07-26 05:17:01.353527] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:42.268 [2024-07-26 
05:17:01.353559] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:42.268 pt2 00:19:42.268 05:17:01 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:42.268 05:17:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:42.268 05:17:01 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:42.527 [2024-07-26 05:17:01.616974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:42.527 [2024-07-26 05:17:01.617111] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.527 [2024-07-26 05:17:01.617172] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:19:42.527 [2024-07-26 05:17:01.617188] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.527 [2024-07-26 05:17:01.617716] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.527 [2024-07-26 05:17:01.617753] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:42.527 [2024-07-26 05:17:01.617878] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:42.527 [2024-07-26 05:17:01.617909] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:42.527 pt3 00:19:42.792 05:17:01 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:42.792 05:17:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:42.792 05:17:01 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:42.792 [2024-07-26 05:17:01.853102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:42.792 [2024-07-26 05:17:01.853240] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.792 [2024-07-26 05:17:01.853275] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:19:42.792 [2024-07-26 05:17:01.853291] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.792 [2024-07-26 05:17:01.854250] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.792 [2024-07-26 05:17:01.854478] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:42.792 [2024-07-26 05:17:01.854600] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:42.792 [2024-07-26 05:17:01.854660] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:42.792 [2024-07-26 05:17:01.854877] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:19:42.792 [2024-07-26 05:17:01.854896] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:42.792 [2024-07-26 05:17:01.855006] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:19:42.792 [2024-07-26 05:17:01.855387] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:19:42.792 [2024-07-26 05:17:01.855402] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:19:42.792 [2024-07-26 05:17:01.855580] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:42.792 pt4 
00:19:42.792 05:17:01 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:42.792 05:17:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:42.792 05:17:01 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:42.792 05:17:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:42.792 05:17:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:42.792 05:17:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:42.792 05:17:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:42.792 05:17:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:42.792 05:17:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:42.792 05:17:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:42.792 05:17:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:42.792 05:17:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:42.792 05:17:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.792 05:17:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.059 05:17:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:43.059 "name": "raid_bdev1", 00:19:43.059 "uuid": "86675f86-900c-401c-8892-958cc63b46fa", 00:19:43.059 "strip_size_kb": 0, 00:19:43.059 "state": "online", 00:19:43.059 "raid_level": "raid1", 00:19:43.059 "superblock": true, 00:19:43.059 "num_base_bdevs": 4, 00:19:43.059 "num_base_bdevs_discovered": 4, 00:19:43.059 "num_base_bdevs_operational": 4, 00:19:43.059 "base_bdevs_list": [ 00:19:43.059 { 00:19:43.059 "name": "pt1", 00:19:43.059 "uuid": "12712af2-2d27-56b9-9bb9-090860364a3c", 00:19:43.059 "is_configured": true, 00:19:43.059 "data_offset": 2048, 00:19:43.059 "data_size": 63488 00:19:43.059 }, 00:19:43.059 { 00:19:43.059 "name": "pt2", 00:19:43.059 "uuid": "863aa765-b31c-568a-a7b3-2e0d646000dd", 00:19:43.059 "is_configured": true, 00:19:43.059 "data_offset": 2048, 00:19:43.059 "data_size": 63488 00:19:43.059 }, 00:19:43.059 { 00:19:43.059 "name": "pt3", 00:19:43.059 "uuid": "68ca2bc3-ce64-52ca-8c36-d48c6fe35287", 00:19:43.059 "is_configured": true, 00:19:43.059 "data_offset": 2048, 00:19:43.059 "data_size": 63488 00:19:43.059 }, 00:19:43.059 { 00:19:43.059 "name": "pt4", 00:19:43.059 "uuid": "4b794df1-b14b-5b71-83cb-447a73c6736d", 00:19:43.059 "is_configured": true, 00:19:43.059 "data_offset": 2048, 00:19:43.059 "data_size": 63488 00:19:43.059 } 00:19:43.059 ] 00:19:43.059 }' 00:19:43.059 05:17:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:43.059 05:17:02 -- common/autotest_common.sh@10 -- # set +x 00:19:43.626 05:17:02 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:43.626 05:17:02 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:19:43.626 [2024-07-26 05:17:02.697618] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:43.626 05:17:02 -- bdev/bdev_raid.sh@430 -- # '[' 86675f86-900c-401c-8892-958cc63b46fa '!=' 86675f86-900c-401c-8892-958cc63b46fa ']' 00:19:43.626 05:17:02 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:19:43.626 05:17:02 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:43.626 05:17:02 -- bdev/bdev_raid.sh@196 -- # return 0 00:19:43.626 05:17:02 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:43.884 [2024-07-26 05:17:02.897454] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:43.884 05:17:02 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:43.884 05:17:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:43.884 05:17:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:43.884 05:17:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:43.884 05:17:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:43.884 05:17:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:43.884 05:17:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:43.884 05:17:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:43.884 05:17:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:43.884 05:17:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:43.884 05:17:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.884 05:17:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.143 05:17:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:44.143 "name": "raid_bdev1", 00:19:44.143 "uuid": "86675f86-900c-401c-8892-958cc63b46fa", 00:19:44.143 "strip_size_kb": 0, 00:19:44.143 "state": "online", 00:19:44.143 "raid_level": "raid1", 00:19:44.143 "superblock": true, 00:19:44.143 "num_base_bdevs": 4, 00:19:44.143 "num_base_bdevs_discovered": 3, 00:19:44.143 "num_base_bdevs_operational": 3, 00:19:44.143 "base_bdevs_list": [ 00:19:44.143 { 00:19:44.143 "name": null, 00:19:44.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.143 "is_configured": false, 00:19:44.143 "data_offset": 2048, 00:19:44.143 "data_size": 63488 00:19:44.143 }, 00:19:44.143 { 00:19:44.143 "name": "pt2", 00:19:44.143 "uuid": "863aa765-b31c-568a-a7b3-2e0d646000dd", 00:19:44.143 "is_configured": true, 00:19:44.143 "data_offset": 2048, 00:19:44.143 "data_size": 63488 00:19:44.143 }, 00:19:44.143 { 00:19:44.143 "name": "pt3", 00:19:44.143 "uuid": "68ca2bc3-ce64-52ca-8c36-d48c6fe35287", 00:19:44.143 "is_configured": true, 00:19:44.143 "data_offset": 2048, 00:19:44.143 "data_size": 63488 00:19:44.143 }, 00:19:44.143 { 00:19:44.143 "name": "pt4", 00:19:44.143 "uuid": "4b794df1-b14b-5b71-83cb-447a73c6736d", 00:19:44.143 "is_configured": true, 00:19:44.143 "data_offset": 2048, 00:19:44.143 "data_size": 63488 00:19:44.143 } 00:19:44.143 ] 00:19:44.143 }' 00:19:44.143 05:17:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:44.143 05:17:03 -- common/autotest_common.sh@10 -- # set +x 00:19:44.411 05:17:03 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:44.669 [2024-07-26 05:17:03.717688] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:44.669 [2024-07-26 05:17:03.717722] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:44.669 [2024-07-26 05:17:03.717791] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:44.669 [2024-07-26 05:17:03.717867] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:44.670 [2024-07-26 05:17:03.717880] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:19:44.670 05:17:03 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:19:44.670 05:17:03 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:19:44.928 05:17:03 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:19:44.928 05:17:03 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:19:44.928 05:17:03 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:19:44.928 05:17:03 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:44.928 05:17:03 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:45.187 05:17:04 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:19:45.187 05:17:04 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:45.187 05:17:04 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:45.446 05:17:04 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:19:45.446 05:17:04 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:45.446 05:17:04 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:45.705 05:17:04 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:19:45.705 05:17:04 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:45.705 05:17:04 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:19:45.705 05:17:04 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:19:45.705 05:17:04 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:45.964 [2024-07-26 05:17:04.926108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:45.964 [2024-07-26 05:17:04.926436] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:45.964 [2024-07-26 05:17:04.926486] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b480 00:19:45.964 [2024-07-26 05:17:04.926501] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:45.964 [2024-07-26 05:17:04.928839] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:45.964 [2024-07-26 05:17:04.928876] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:45.964 [2024-07-26 05:17:04.929004] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:45.964 [2024-07-26 05:17:04.929066] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:45.964 pt2 00:19:45.964 05:17:04 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:45.964 05:17:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:45.964 05:17:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:45.964 05:17:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:45.964 05:17:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:45.964 05:17:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:45.964 05:17:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:45.964 05:17:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:45.964 05:17:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:45.964 05:17:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:45.964 05:17:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.964 05:17:04 -- bdev/bdev_raid.sh@127 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.223 05:17:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:46.223 "name": "raid_bdev1", 00:19:46.223 "uuid": "86675f86-900c-401c-8892-958cc63b46fa", 00:19:46.223 "strip_size_kb": 0, 00:19:46.223 "state": "configuring", 00:19:46.223 "raid_level": "raid1", 00:19:46.223 "superblock": true, 00:19:46.223 "num_base_bdevs": 4, 00:19:46.223 "num_base_bdevs_discovered": 1, 00:19:46.223 "num_base_bdevs_operational": 3, 00:19:46.223 "base_bdevs_list": [ 00:19:46.223 { 00:19:46.223 "name": null, 00:19:46.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.223 "is_configured": false, 00:19:46.223 "data_offset": 2048, 00:19:46.223 "data_size": 63488 00:19:46.223 }, 00:19:46.223 { 00:19:46.223 "name": "pt2", 00:19:46.223 "uuid": "863aa765-b31c-568a-a7b3-2e0d646000dd", 00:19:46.223 "is_configured": true, 00:19:46.223 "data_offset": 2048, 00:19:46.223 "data_size": 63488 00:19:46.223 }, 00:19:46.223 { 00:19:46.223 "name": null, 00:19:46.223 "uuid": "68ca2bc3-ce64-52ca-8c36-d48c6fe35287", 00:19:46.223 "is_configured": false, 00:19:46.223 "data_offset": 2048, 00:19:46.223 "data_size": 63488 00:19:46.223 }, 00:19:46.223 { 00:19:46.223 "name": null, 00:19:46.223 "uuid": "4b794df1-b14b-5b71-83cb-447a73c6736d", 00:19:46.223 "is_configured": false, 00:19:46.223 "data_offset": 2048, 00:19:46.223 "data_size": 63488 00:19:46.223 } 00:19:46.223 ] 00:19:46.223 }' 00:19:46.223 05:17:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:46.223 05:17:05 -- common/autotest_common.sh@10 -- # set +x 00:19:46.482 05:17:05 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:19:46.482 05:17:05 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:19:46.482 05:17:05 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:46.740 [2024-07-26 05:17:05.646298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:46.740 [2024-07-26 05:17:05.646408] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.740 [2024-07-26 05:17:05.646442] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000bd80 00:19:46.740 [2024-07-26 05:17:05.646457] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.740 [2024-07-26 05:17:05.646974] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.740 [2024-07-26 05:17:05.647002] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:46.740 [2024-07-26 05:17:05.647169] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:46.740 [2024-07-26 05:17:05.647200] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:46.740 pt3 00:19:46.740 05:17:05 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:46.740 05:17:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:46.740 05:17:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:46.740 05:17:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:46.740 05:17:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:46.740 05:17:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:46.740 05:17:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:46.740 05:17:05 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:19:46.740 05:17:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:46.740 05:17:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:46.740 05:17:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.740 05:17:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.999 05:17:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:46.999 "name": "raid_bdev1", 00:19:46.999 "uuid": "86675f86-900c-401c-8892-958cc63b46fa", 00:19:46.999 "strip_size_kb": 0, 00:19:46.999 "state": "configuring", 00:19:46.999 "raid_level": "raid1", 00:19:46.999 "superblock": true, 00:19:46.999 "num_base_bdevs": 4, 00:19:46.999 "num_base_bdevs_discovered": 2, 00:19:46.999 "num_base_bdevs_operational": 3, 00:19:46.999 "base_bdevs_list": [ 00:19:46.999 { 00:19:46.999 "name": null, 00:19:46.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.999 "is_configured": false, 00:19:46.999 "data_offset": 2048, 00:19:46.999 "data_size": 63488 00:19:46.999 }, 00:19:46.999 { 00:19:46.999 "name": "pt2", 00:19:46.999 "uuid": "863aa765-b31c-568a-a7b3-2e0d646000dd", 00:19:46.999 "is_configured": true, 00:19:46.999 "data_offset": 2048, 00:19:46.999 "data_size": 63488 00:19:46.999 }, 00:19:46.999 { 00:19:46.999 "name": "pt3", 00:19:46.999 "uuid": "68ca2bc3-ce64-52ca-8c36-d48c6fe35287", 00:19:46.999 "is_configured": true, 00:19:46.999 "data_offset": 2048, 00:19:46.999 "data_size": 63488 00:19:46.999 }, 00:19:46.999 { 00:19:46.999 "name": null, 00:19:46.999 "uuid": "4b794df1-b14b-5b71-83cb-447a73c6736d", 00:19:46.999 "is_configured": false, 00:19:46.999 "data_offset": 2048, 00:19:46.999 "data_size": 63488 00:19:46.999 } 00:19:46.999 ] 00:19:46.999 }' 00:19:46.999 05:17:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:46.999 05:17:05 -- common/autotest_common.sh@10 -- # set +x 00:19:47.258 05:17:06 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:19:47.258 05:17:06 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:19:47.258 05:17:06 -- bdev/bdev_raid.sh@462 -- # i=3 00:19:47.258 05:17:06 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:47.258 [2024-07-26 05:17:06.362485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:47.258 [2024-07-26 05:17:06.362566] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:47.258 [2024-07-26 05:17:06.362602] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c080 00:19:47.258 [2024-07-26 05:17:06.362615] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:47.258 [2024-07-26 05:17:06.363119] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:47.258 [2024-07-26 05:17:06.363141] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:47.258 [2024-07-26 05:17:06.363234] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:47.258 [2024-07-26 05:17:06.363315] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:47.258 [2024-07-26 05:17:06.363523] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ba80 00:19:47.258 [2024-07-26 05:17:06.363538] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 
00:19:47.258 [2024-07-26 05:17:06.363653] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:19:47.258 [2024-07-26 05:17:06.364057] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ba80 00:19:47.258 [2024-07-26 05:17:06.364108] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ba80 00:19:47.258 [2024-07-26 05:17:06.364250] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:47.516 pt4 00:19:47.516 05:17:06 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:47.516 05:17:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:47.516 05:17:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:47.516 05:17:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:47.516 05:17:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:47.517 05:17:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:47.517 05:17:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:47.517 05:17:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:47.517 05:17:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:47.517 05:17:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:47.517 05:17:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.517 05:17:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.517 05:17:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:47.517 "name": "raid_bdev1", 00:19:47.517 "uuid": "86675f86-900c-401c-8892-958cc63b46fa", 00:19:47.517 "strip_size_kb": 0, 00:19:47.517 "state": "online", 00:19:47.517 "raid_level": "raid1", 00:19:47.517 "superblock": true, 00:19:47.517 "num_base_bdevs": 4, 00:19:47.517 "num_base_bdevs_discovered": 3, 00:19:47.517 "num_base_bdevs_operational": 3, 00:19:47.517 "base_bdevs_list": [ 00:19:47.517 { 00:19:47.517 "name": null, 00:19:47.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.517 "is_configured": false, 00:19:47.517 "data_offset": 2048, 00:19:47.517 "data_size": 63488 00:19:47.517 }, 00:19:47.517 { 00:19:47.517 "name": "pt2", 00:19:47.517 "uuid": "863aa765-b31c-568a-a7b3-2e0d646000dd", 00:19:47.517 "is_configured": true, 00:19:47.517 "data_offset": 2048, 00:19:47.517 "data_size": 63488 00:19:47.517 }, 00:19:47.517 { 00:19:47.517 "name": "pt3", 00:19:47.517 "uuid": "68ca2bc3-ce64-52ca-8c36-d48c6fe35287", 00:19:47.517 "is_configured": true, 00:19:47.517 "data_offset": 2048, 00:19:47.517 "data_size": 63488 00:19:47.517 }, 00:19:47.517 { 00:19:47.517 "name": "pt4", 00:19:47.517 "uuid": "4b794df1-b14b-5b71-83cb-447a73c6736d", 00:19:47.517 "is_configured": true, 00:19:47.517 "data_offset": 2048, 00:19:47.517 "data_size": 63488 00:19:47.517 } 00:19:47.517 ] 00:19:47.517 }' 00:19:47.517 05:17:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:47.517 05:17:06 -- common/autotest_common.sh@10 -- # set +x 00:19:48.083 05:17:06 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:19:48.083 05:17:06 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:48.378 [2024-07-26 05:17:07.202814] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:48.378 [2024-07-26 05:17:07.202853] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:19:48.378 [2024-07-26 05:17:07.202930] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:48.378 [2024-07-26 05:17:07.203015] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:48.378 [2024-07-26 05:17:07.203049] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ba80 name raid_bdev1, state offline 00:19:48.378 05:17:07 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.378 05:17:07 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:19:48.378 05:17:07 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:19:48.378 05:17:07 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:19:48.378 05:17:07 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:48.674 [2024-07-26 05:17:07.623010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:48.674 [2024-07-26 05:17:07.623340] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:48.674 [2024-07-26 05:17:07.623399] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c380 00:19:48.674 [2024-07-26 05:17:07.623418] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:48.674 [2024-07-26 05:17:07.626231] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:48.674 [2024-07-26 05:17:07.626275] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:48.674 [2024-07-26 05:17:07.626404] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:48.674 [2024-07-26 05:17:07.626495] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:48.674 pt1 00:19:48.674 05:17:07 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:19:48.674 05:17:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:48.674 05:17:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:48.674 05:17:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:48.674 05:17:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:48.674 05:17:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:48.674 05:17:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:48.674 05:17:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:48.674 05:17:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:48.674 05:17:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:48.674 05:17:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.674 05:17:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.932 05:17:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:48.932 "name": "raid_bdev1", 00:19:48.932 "uuid": "86675f86-900c-401c-8892-958cc63b46fa", 00:19:48.932 "strip_size_kb": 0, 00:19:48.932 "state": "configuring", 00:19:48.932 "raid_level": "raid1", 00:19:48.932 "superblock": true, 00:19:48.932 "num_base_bdevs": 4, 00:19:48.932 "num_base_bdevs_discovered": 1, 00:19:48.933 "num_base_bdevs_operational": 4, 00:19:48.933 "base_bdevs_list": [ 00:19:48.933 { 00:19:48.933 "name": "pt1", 00:19:48.933 "uuid": 
"12712af2-2d27-56b9-9bb9-090860364a3c", 00:19:48.933 "is_configured": true, 00:19:48.933 "data_offset": 2048, 00:19:48.933 "data_size": 63488 00:19:48.933 }, 00:19:48.933 { 00:19:48.933 "name": null, 00:19:48.933 "uuid": "863aa765-b31c-568a-a7b3-2e0d646000dd", 00:19:48.933 "is_configured": false, 00:19:48.933 "data_offset": 2048, 00:19:48.933 "data_size": 63488 00:19:48.933 }, 00:19:48.933 { 00:19:48.933 "name": null, 00:19:48.933 "uuid": "68ca2bc3-ce64-52ca-8c36-d48c6fe35287", 00:19:48.933 "is_configured": false, 00:19:48.933 "data_offset": 2048, 00:19:48.933 "data_size": 63488 00:19:48.933 }, 00:19:48.933 { 00:19:48.933 "name": null, 00:19:48.933 "uuid": "4b794df1-b14b-5b71-83cb-447a73c6736d", 00:19:48.933 "is_configured": false, 00:19:48.933 "data_offset": 2048, 00:19:48.933 "data_size": 63488 00:19:48.933 } 00:19:48.933 ] 00:19:48.933 }' 00:19:48.933 05:17:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:48.933 05:17:07 -- common/autotest_common.sh@10 -- # set +x 00:19:49.191 05:17:08 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:19:49.191 05:17:08 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:19:49.191 05:17:08 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:49.449 05:17:08 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:19:49.449 05:17:08 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:19:49.449 05:17:08 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:49.707 05:17:08 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:19:49.707 05:17:08 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:19:49.707 05:17:08 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:49.966 05:17:08 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:19:49.966 05:17:08 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:19:49.966 05:17:08 -- bdev/bdev_raid.sh@489 -- # i=3 00:19:49.966 05:17:08 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:49.966 [2024-07-26 05:17:09.065535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:49.966 [2024-07-26 05:17:09.065685] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:49.966 [2024-07-26 05:17:09.065716] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000cc80 00:19:49.966 [2024-07-26 05:17:09.065732] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:49.966 [2024-07-26 05:17:09.066371] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:49.966 [2024-07-26 05:17:09.066402] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:49.966 [2024-07-26 05:17:09.066506] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:49.966 [2024-07-26 05:17:09.066533] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:49.966 [2024-07-26 05:17:09.066560] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:49.966 [2024-07-26 05:17:09.066599] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000c980 name raid_bdev1, state configuring 
00:19:49.966 [2024-07-26 05:17:09.066691] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:49.966 pt4 00:19:50.225 05:17:09 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:50.225 05:17:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:50.225 05:17:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:50.225 05:17:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:50.225 05:17:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:50.225 05:17:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:50.225 05:17:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:50.225 05:17:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:50.225 05:17:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:50.225 05:17:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:50.225 05:17:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.225 05:17:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.483 05:17:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:50.483 "name": "raid_bdev1", 00:19:50.483 "uuid": "86675f86-900c-401c-8892-958cc63b46fa", 00:19:50.483 "strip_size_kb": 0, 00:19:50.483 "state": "configuring", 00:19:50.483 "raid_level": "raid1", 00:19:50.483 "superblock": true, 00:19:50.483 "num_base_bdevs": 4, 00:19:50.483 "num_base_bdevs_discovered": 1, 00:19:50.483 "num_base_bdevs_operational": 3, 00:19:50.483 "base_bdevs_list": [ 00:19:50.483 { 00:19:50.483 "name": null, 00:19:50.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.483 "is_configured": false, 00:19:50.483 "data_offset": 2048, 00:19:50.483 "data_size": 63488 00:19:50.483 }, 00:19:50.483 { 00:19:50.483 "name": null, 00:19:50.483 "uuid": "863aa765-b31c-568a-a7b3-2e0d646000dd", 00:19:50.483 "is_configured": false, 00:19:50.483 "data_offset": 2048, 00:19:50.483 "data_size": 63488 00:19:50.483 }, 00:19:50.483 { 00:19:50.483 "name": null, 00:19:50.483 "uuid": "68ca2bc3-ce64-52ca-8c36-d48c6fe35287", 00:19:50.483 "is_configured": false, 00:19:50.483 "data_offset": 2048, 00:19:50.483 "data_size": 63488 00:19:50.483 }, 00:19:50.483 { 00:19:50.483 "name": "pt4", 00:19:50.483 "uuid": "4b794df1-b14b-5b71-83cb-447a73c6736d", 00:19:50.483 "is_configured": true, 00:19:50.483 "data_offset": 2048, 00:19:50.483 "data_size": 63488 00:19:50.483 } 00:19:50.483 ] 00:19:50.483 }' 00:19:50.483 05:17:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:50.483 05:17:09 -- common/autotest_common.sh@10 -- # set +x 00:19:50.741 05:17:09 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:19:50.741 05:17:09 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:19:50.741 05:17:09 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:51.000 [2024-07-26 05:17:09.953909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:51.000 [2024-07-26 05:17:09.954082] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:51.000 [2024-07-26 05:17:09.954140] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000d280 00:19:51.000 [2024-07-26 05:17:09.954167] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:51.000 [2024-07-26 
05:17:09.954780] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:51.000 [2024-07-26 05:17:09.954848] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:51.000 [2024-07-26 05:17:09.954963] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:51.000 [2024-07-26 05:17:09.954992] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:51.000 pt2 00:19:51.000 05:17:09 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:19:51.000 05:17:09 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:19:51.000 05:17:09 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:51.258 [2024-07-26 05:17:10.274031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:51.258 [2024-07-26 05:17:10.274413] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:51.258 [2024-07-26 05:17:10.274465] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000d580 00:19:51.258 [2024-07-26 05:17:10.274482] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:51.258 [2024-07-26 05:17:10.275030] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:51.258 [2024-07-26 05:17:10.275054] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:51.258 [2024-07-26 05:17:10.275180] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:51.258 [2024-07-26 05:17:10.275215] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:51.258 [2024-07-26 05:17:10.275358] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000cf80 00:19:51.258 [2024-07-26 05:17:10.275371] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:51.258 [2024-07-26 05:17:10.275462] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:19:51.258 [2024-07-26 05:17:10.275772] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000cf80 00:19:51.258 [2024-07-26 05:17:10.275805] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000cf80 00:19:51.258 [2024-07-26 05:17:10.276009] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:51.258 pt3 00:19:51.258 05:17:10 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:19:51.258 05:17:10 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:19:51.258 05:17:10 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:51.258 05:17:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:51.258 05:17:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:51.258 05:17:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:51.258 05:17:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:51.258 05:17:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:51.258 05:17:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:51.258 05:17:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:51.258 05:17:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:51.258 05:17:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:51.258 05:17:10 
-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.258 05:17:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.516 05:17:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:51.516 "name": "raid_bdev1", 00:19:51.516 "uuid": "86675f86-900c-401c-8892-958cc63b46fa", 00:19:51.516 "strip_size_kb": 0, 00:19:51.516 "state": "online", 00:19:51.516 "raid_level": "raid1", 00:19:51.516 "superblock": true, 00:19:51.516 "num_base_bdevs": 4, 00:19:51.516 "num_base_bdevs_discovered": 3, 00:19:51.516 "num_base_bdevs_operational": 3, 00:19:51.516 "base_bdevs_list": [ 00:19:51.516 { 00:19:51.516 "name": null, 00:19:51.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.516 "is_configured": false, 00:19:51.516 "data_offset": 2048, 00:19:51.516 "data_size": 63488 00:19:51.516 }, 00:19:51.516 { 00:19:51.516 "name": "pt2", 00:19:51.516 "uuid": "863aa765-b31c-568a-a7b3-2e0d646000dd", 00:19:51.516 "is_configured": true, 00:19:51.516 "data_offset": 2048, 00:19:51.516 "data_size": 63488 00:19:51.516 }, 00:19:51.516 { 00:19:51.516 "name": "pt3", 00:19:51.516 "uuid": "68ca2bc3-ce64-52ca-8c36-d48c6fe35287", 00:19:51.516 "is_configured": true, 00:19:51.516 "data_offset": 2048, 00:19:51.516 "data_size": 63488 00:19:51.516 }, 00:19:51.516 { 00:19:51.516 "name": "pt4", 00:19:51.516 "uuid": "4b794df1-b14b-5b71-83cb-447a73c6736d", 00:19:51.516 "is_configured": true, 00:19:51.516 "data_offset": 2048, 00:19:51.516 "data_size": 63488 00:19:51.516 } 00:19:51.516 ] 00:19:51.516 }' 00:19:51.517 05:17:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:51.517 05:17:10 -- common/autotest_common.sh@10 -- # set +x 00:19:51.775 05:17:10 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:51.775 05:17:10 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:19:52.034 [2024-07-26 05:17:11.054474] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:52.034 05:17:11 -- bdev/bdev_raid.sh@506 -- # '[' 86675f86-900c-401c-8892-958cc63b46fa '!=' 86675f86-900c-401c-8892-958cc63b46fa ']' 00:19:52.034 05:17:11 -- bdev/bdev_raid.sh@511 -- # killprocess 77515 00:19:52.034 05:17:11 -- common/autotest_common.sh@926 -- # '[' -z 77515 ']' 00:19:52.034 05:17:11 -- common/autotest_common.sh@930 -- # kill -0 77515 00:19:52.034 05:17:11 -- common/autotest_common.sh@931 -- # uname 00:19:52.034 05:17:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:52.034 05:17:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77515 00:19:52.034 killing process with pid 77515 00:19:52.034 05:17:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:52.034 05:17:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:52.034 05:17:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77515' 00:19:52.034 05:17:11 -- common/autotest_common.sh@945 -- # kill 77515 00:19:52.034 05:17:11 -- common/autotest_common.sh@950 -- # wait 77515 00:19:52.034 [2024-07-26 05:17:11.102798] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:52.034 [2024-07-26 05:17:11.102873] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:52.034 [2024-07-26 05:17:11.102995] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:52.034 [2024-07-26 05:17:11.103052] 
bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000cf80 name raid_bdev1, state offline 00:19:52.302 [2024-07-26 05:17:11.371140] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:53.239 05:17:12 -- bdev/bdev_raid.sh@513 -- # return 0 00:19:53.239 00:19:53.239 real 0m18.674s 00:19:53.239 user 0m32.452s 00:19:53.239 sys 0m2.848s 00:19:53.239 05:17:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:53.239 05:17:12 -- common/autotest_common.sh@10 -- # set +x 00:19:53.239 ************************************ 00:19:53.239 END TEST raid_superblock_test 00:19:53.239 ************************************ 00:19:53.498 05:17:12 -- bdev/bdev_raid.sh@733 -- # '[' true = true ']' 00:19:53.498 05:17:12 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:19:53.498 05:17:12 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false 00:19:53.498 05:17:12 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:19:53.498 05:17:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:53.498 05:17:12 -- common/autotest_common.sh@10 -- # set +x 00:19:53.498 ************************************ 00:19:53.498 START TEST raid_rebuild_test 00:19:53.498 ************************************ 00:19:53.498 05:17:12 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false false 00:19:53.498 05:17:12 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:19:53.498 05:17:12 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:19:53.498 05:17:12 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:19:53.498 05:17:12 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:19:53.498 05:17:12 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:19:53.498 05:17:12 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:53.498 05:17:12 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:19:53.498 05:17:12 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:53.498 05:17:12 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:53.498 05:17:12 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:19:53.498 05:17:12 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:53.498 05:17:12 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:53.498 05:17:12 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:53.498 05:17:12 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:19:53.498 05:17:12 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:19:53.498 05:17:12 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:19:53.498 05:17:12 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:19:53.498 05:17:12 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:19:53.498 05:17:12 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:19:53.498 05:17:12 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:19:53.498 05:17:12 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:19:53.498 05:17:12 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:19:53.498 05:17:12 -- bdev/bdev_raid.sh@544 -- # raid_pid=78126 00:19:53.498 05:17:12 -- bdev/bdev_raid.sh@545 -- # waitforlisten 78126 /var/tmp/spdk-raid.sock 00:19:53.498 05:17:12 -- common/autotest_common.sh@819 -- # '[' -z 78126 ']' 00:19:53.498 05:17:12 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:53.498 05:17:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:53.498 05:17:12 -- common/autotest_common.sh@824 
-- # local max_retries=100 00:19:53.498 05:17:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:53.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:53.498 05:17:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:53.498 05:17:12 -- common/autotest_common.sh@10 -- # set +x 00:19:53.498 [2024-07-26 05:17:12.445348] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:53.498 [2024-07-26 05:17:12.445771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78126 ] 00:19:53.498 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:53.498 Zero copy mechanism will not be used. 00:19:53.756 [2024-07-26 05:17:12.620208] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.757 [2024-07-26 05:17:12.841642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.015 [2024-07-26 05:17:12.997823] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:54.274 05:17:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:54.274 05:17:13 -- common/autotest_common.sh@852 -- # return 0 00:19:54.274 05:17:13 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:54.274 05:17:13 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:54.274 05:17:13 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:54.532 BaseBdev1 00:19:54.532 05:17:13 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:54.532 05:17:13 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:54.532 05:17:13 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:54.791 BaseBdev2 00:19:54.791 05:17:13 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:55.049 spare_malloc 00:19:55.049 05:17:13 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:55.308 spare_delay 00:19:55.308 05:17:14 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:55.308 [2024-07-26 05:17:14.368765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:55.308 [2024-07-26 05:17:14.368862] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.308 [2024-07-26 05:17:14.368896] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007b80 00:19:55.308 [2024-07-26 05:17:14.368913] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.308 [2024-07-26 05:17:14.371967] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.308 spare 00:19:55.308 [2024-07-26 05:17:14.372251] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:55.308 05:17:14 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:19:55.567 [2024-07-26 05:17:14.589182] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:55.567 [2024-07-26 05:17:14.592106] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:55.567 [2024-07-26 05:17:14.592402] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008180 00:19:55.567 [2024-07-26 05:17:14.592558] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:55.567 [2024-07-26 05:17:14.592742] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:19:55.567 [2024-07-26 05:17:14.593297] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008180 00:19:55.567 [2024-07-26 05:17:14.593440] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008180 00:19:55.567 [2024-07-26 05:17:14.593825] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:55.567 05:17:14 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:55.567 05:17:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:55.567 05:17:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:55.567 05:17:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:55.567 05:17:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:55.567 05:17:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:55.567 05:17:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:55.567 05:17:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:55.567 05:17:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:55.567 05:17:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:55.567 05:17:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.567 05:17:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.826 05:17:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:55.826 "name": "raid_bdev1", 00:19:55.826 "uuid": "d4cb13cc-c7b5-4c2c-ac3c-6fb8008033b4", 00:19:55.826 "strip_size_kb": 0, 00:19:55.826 "state": "online", 00:19:55.826 "raid_level": "raid1", 00:19:55.826 "superblock": false, 00:19:55.826 "num_base_bdevs": 2, 00:19:55.826 "num_base_bdevs_discovered": 2, 00:19:55.826 "num_base_bdevs_operational": 2, 00:19:55.826 "base_bdevs_list": [ 00:19:55.826 { 00:19:55.826 "name": "BaseBdev1", 00:19:55.826 "uuid": "f6574c93-c925-4f0c-8bf2-b7ca9395d525", 00:19:55.826 "is_configured": true, 00:19:55.826 "data_offset": 0, 00:19:55.826 "data_size": 65536 00:19:55.826 }, 00:19:55.826 { 00:19:55.826 "name": "BaseBdev2", 00:19:55.826 "uuid": "1ae9343c-4d50-476f-a8a7-6af5d428913b", 00:19:55.826 "is_configured": true, 00:19:55.826 "data_offset": 0, 00:19:55.826 "data_size": 65536 00:19:55.826 } 00:19:55.826 ] 00:19:55.826 }' 00:19:55.826 05:17:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:55.826 05:17:14 -- common/autotest_common.sh@10 -- # set +x 00:19:56.083 05:17:15 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:19:56.083 05:17:15 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:56.341 [2024-07-26 05:17:15.362217] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:19:56.341 05:17:15 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:19:56.341 05:17:15 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:56.341 05:17:15 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.600 05:17:15 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:19:56.600 05:17:15 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:19:56.600 05:17:15 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:19:56.600 05:17:15 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:19:56.600 05:17:15 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:56.600 05:17:15 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:56.600 05:17:15 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:56.600 05:17:15 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:56.600 05:17:15 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:56.600 05:17:15 -- bdev/nbd_common.sh@12 -- # local i 00:19:56.600 05:17:15 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:56.600 05:17:15 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:56.600 05:17:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:56.859 [2024-07-26 05:17:15.902220] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:19:56.859 /dev/nbd0 00:19:56.859 05:17:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:56.859 05:17:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:56.859 05:17:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:19:56.859 05:17:15 -- common/autotest_common.sh@857 -- # local i 00:19:56.859 05:17:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:19:56.859 05:17:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:19:56.859 05:17:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:19:56.859 05:17:15 -- common/autotest_common.sh@861 -- # break 00:19:56.859 05:17:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:56.859 05:17:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:19:56.859 05:17:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:56.859 1+0 records in 00:19:56.859 1+0 records out 00:19:56.859 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000607117 s, 6.7 MB/s 00:19:56.859 05:17:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.859 05:17:15 -- common/autotest_common.sh@874 -- # size=4096 00:19:56.859 05:17:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.859 05:17:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:19:56.859 05:17:15 -- common/autotest_common.sh@877 -- # return 0 00:19:56.859 05:17:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:56.859 05:17:15 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:56.859 05:17:15 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:19:56.859 05:17:15 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:19:56.859 05:17:15 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:20:03.452 65536+0 records in 00:20:03.452 65536+0 records out 00:20:03.452 33554432 bytes (34 MB, 32 MiB) copied, 5.34011 s, 6.3 MB/s 00:20:03.452 
05:17:21 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:03.452 05:17:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:03.452 05:17:21 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:03.453 05:17:21 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:03.453 05:17:21 -- bdev/nbd_common.sh@51 -- # local i 00:20:03.453 05:17:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:03.453 05:17:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:03.453 [2024-07-26 05:17:21.532558] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:03.453 05:17:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:03.453 05:17:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:03.453 05:17:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:03.453 05:17:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:03.453 05:17:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:03.453 05:17:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:03.453 05:17:21 -- bdev/nbd_common.sh@41 -- # break 00:20:03.453 05:17:21 -- bdev/nbd_common.sh@45 -- # return 0 00:20:03.453 05:17:21 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:03.453 [2024-07-26 05:17:21.728673] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:03.453 05:17:21 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:03.453 05:17:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:03.453 05:17:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:03.453 05:17:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:03.453 05:17:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:03.453 05:17:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:03.453 05:17:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:03.453 05:17:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:03.453 05:17:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:03.453 05:17:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:03.453 05:17:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.453 05:17:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.453 05:17:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:03.453 "name": "raid_bdev1", 00:20:03.453 "uuid": "d4cb13cc-c7b5-4c2c-ac3c-6fb8008033b4", 00:20:03.453 "strip_size_kb": 0, 00:20:03.453 "state": "online", 00:20:03.453 "raid_level": "raid1", 00:20:03.453 "superblock": false, 00:20:03.453 "num_base_bdevs": 2, 00:20:03.453 "num_base_bdevs_discovered": 1, 00:20:03.453 "num_base_bdevs_operational": 1, 00:20:03.453 "base_bdevs_list": [ 00:20:03.453 { 00:20:03.453 "name": null, 00:20:03.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.453 "is_configured": false, 00:20:03.453 "data_offset": 0, 00:20:03.453 "data_size": 65536 00:20:03.453 }, 00:20:03.453 { 00:20:03.453 "name": "BaseBdev2", 00:20:03.453 "uuid": "1ae9343c-4d50-476f-a8a7-6af5d428913b", 00:20:03.453 "is_configured": true, 00:20:03.453 "data_offset": 0, 00:20:03.453 "data_size": 65536 00:20:03.453 } 00:20:03.453 ] 00:20:03.453 }' 00:20:03.453 05:17:21 -- bdev/bdev_raid.sh@129 -- 
# xtrace_disable 00:20:03.453 05:17:21 -- common/autotest_common.sh@10 -- # set +x 00:20:03.453 05:17:22 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:03.453 [2024-07-26 05:17:22.436852] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:03.453 [2024-07-26 05:17:22.436902] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:03.453 [2024-07-26 05:17:22.449015] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000d09480 00:20:03.453 [2024-07-26 05:17:22.451212] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:03.453 05:17:22 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:04.389 05:17:23 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:04.389 05:17:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:04.389 05:17:23 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:04.389 05:17:23 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:04.389 05:17:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:04.389 05:17:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.389 05:17:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.647 05:17:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:04.647 "name": "raid_bdev1", 00:20:04.647 "uuid": "d4cb13cc-c7b5-4c2c-ac3c-6fb8008033b4", 00:20:04.647 "strip_size_kb": 0, 00:20:04.647 "state": "online", 00:20:04.647 "raid_level": "raid1", 00:20:04.647 "superblock": false, 00:20:04.647 "num_base_bdevs": 2, 00:20:04.647 "num_base_bdevs_discovered": 2, 00:20:04.647 "num_base_bdevs_operational": 2, 00:20:04.647 "process": { 00:20:04.647 "type": "rebuild", 00:20:04.647 "target": "spare", 00:20:04.647 "progress": { 00:20:04.647 "blocks": 24576, 00:20:04.647 "percent": 37 00:20:04.647 } 00:20:04.647 }, 00:20:04.647 "base_bdevs_list": [ 00:20:04.647 { 00:20:04.647 "name": "spare", 00:20:04.647 "uuid": "f95c8517-7d4b-5eea-a866-1d9b1b5071f2", 00:20:04.647 "is_configured": true, 00:20:04.647 "data_offset": 0, 00:20:04.647 "data_size": 65536 00:20:04.647 }, 00:20:04.647 { 00:20:04.647 "name": "BaseBdev2", 00:20:04.647 "uuid": "1ae9343c-4d50-476f-a8a7-6af5d428913b", 00:20:04.647 "is_configured": true, 00:20:04.647 "data_offset": 0, 00:20:04.647 "data_size": 65536 00:20:04.647 } 00:20:04.647 ] 00:20:04.647 }' 00:20:04.647 05:17:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:04.647 05:17:23 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:04.647 05:17:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:04.647 05:17:23 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:04.647 05:17:23 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:04.905 [2024-07-26 05:17:23.957316] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:04.905 [2024-07-26 05:17:23.958175] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:04.905 [2024-07-26 05:17:23.958279] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:04.905 05:17:23 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 1 00:20:04.905 05:17:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:04.905 05:17:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:04.905 05:17:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:04.905 05:17:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:04.905 05:17:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:04.905 05:17:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:04.905 05:17:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:04.905 05:17:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:04.905 05:17:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:04.905 05:17:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.905 05:17:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.163 05:17:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:05.163 "name": "raid_bdev1", 00:20:05.163 "uuid": "d4cb13cc-c7b5-4c2c-ac3c-6fb8008033b4", 00:20:05.163 "strip_size_kb": 0, 00:20:05.163 "state": "online", 00:20:05.163 "raid_level": "raid1", 00:20:05.163 "superblock": false, 00:20:05.163 "num_base_bdevs": 2, 00:20:05.163 "num_base_bdevs_discovered": 1, 00:20:05.163 "num_base_bdevs_operational": 1, 00:20:05.163 "base_bdevs_list": [ 00:20:05.163 { 00:20:05.163 "name": null, 00:20:05.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.163 "is_configured": false, 00:20:05.163 "data_offset": 0, 00:20:05.163 "data_size": 65536 00:20:05.163 }, 00:20:05.163 { 00:20:05.163 "name": "BaseBdev2", 00:20:05.163 "uuid": "1ae9343c-4d50-476f-a8a7-6af5d428913b", 00:20:05.163 "is_configured": true, 00:20:05.163 "data_offset": 0, 00:20:05.163 "data_size": 65536 00:20:05.163 } 00:20:05.163 ] 00:20:05.163 }' 00:20:05.163 05:17:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:05.163 05:17:24 -- common/autotest_common.sh@10 -- # set +x 00:20:05.730 05:17:24 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:05.730 05:17:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:05.730 05:17:24 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:05.730 05:17:24 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:05.730 05:17:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:05.730 05:17:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.730 05:17:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.730 05:17:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:05.730 "name": "raid_bdev1", 00:20:05.730 "uuid": "d4cb13cc-c7b5-4c2c-ac3c-6fb8008033b4", 00:20:05.730 "strip_size_kb": 0, 00:20:05.730 "state": "online", 00:20:05.730 "raid_level": "raid1", 00:20:05.730 "superblock": false, 00:20:05.730 "num_base_bdevs": 2, 00:20:05.730 "num_base_bdevs_discovered": 1, 00:20:05.730 "num_base_bdevs_operational": 1, 00:20:05.730 "base_bdevs_list": [ 00:20:05.730 { 00:20:05.730 "name": null, 00:20:05.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.730 "is_configured": false, 00:20:05.730 "data_offset": 0, 00:20:05.730 "data_size": 65536 00:20:05.730 }, 00:20:05.730 { 00:20:05.730 "name": "BaseBdev2", 00:20:05.730 "uuid": "1ae9343c-4d50-476f-a8a7-6af5d428913b", 00:20:05.730 "is_configured": true, 00:20:05.730 "data_offset": 0, 00:20:05.730 "data_size": 65536 
00:20:05.730 } 00:20:05.730 ] 00:20:05.730 }' 00:20:05.730 05:17:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:05.730 05:17:24 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:05.730 05:17:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:05.730 05:17:24 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:05.730 05:17:24 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:05.988 [2024-07-26 05:17:25.068319] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:05.988 [2024-07-26 05:17:25.068363] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:05.988 [2024-07-26 05:17:25.079394] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000d09550 00:20:05.988 [2024-07-26 05:17:25.081304] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:05.988 05:17:25 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:07.364 05:17:26 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:07.364 05:17:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:07.364 05:17:26 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:07.364 05:17:26 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:07.364 05:17:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:07.364 05:17:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.364 05:17:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.364 05:17:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:07.364 "name": "raid_bdev1", 00:20:07.364 "uuid": "d4cb13cc-c7b5-4c2c-ac3c-6fb8008033b4", 00:20:07.365 "strip_size_kb": 0, 00:20:07.365 "state": "online", 00:20:07.365 "raid_level": "raid1", 00:20:07.365 "superblock": false, 00:20:07.365 "num_base_bdevs": 2, 00:20:07.365 "num_base_bdevs_discovered": 2, 00:20:07.365 "num_base_bdevs_operational": 2, 00:20:07.365 "process": { 00:20:07.365 "type": "rebuild", 00:20:07.365 "target": "spare", 00:20:07.365 "progress": { 00:20:07.365 "blocks": 22528, 00:20:07.365 "percent": 34 00:20:07.365 } 00:20:07.365 }, 00:20:07.365 "base_bdevs_list": [ 00:20:07.365 { 00:20:07.365 "name": "spare", 00:20:07.365 "uuid": "f95c8517-7d4b-5eea-a866-1d9b1b5071f2", 00:20:07.365 "is_configured": true, 00:20:07.365 "data_offset": 0, 00:20:07.365 "data_size": 65536 00:20:07.365 }, 00:20:07.365 { 00:20:07.365 "name": "BaseBdev2", 00:20:07.365 "uuid": "1ae9343c-4d50-476f-a8a7-6af5d428913b", 00:20:07.365 "is_configured": true, 00:20:07.365 "data_offset": 0, 00:20:07.365 "data_size": 65536 00:20:07.365 } 00:20:07.365 ] 00:20:07.365 }' 00:20:07.365 05:17:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:07.365 05:17:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:07.365 05:17:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:07.365 05:17:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:07.365 05:17:26 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:20:07.365 05:17:26 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:07.365 05:17:26 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:07.365 05:17:26 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:07.365 05:17:26 -- 
bdev/bdev_raid.sh@657 -- # local timeout=353 00:20:07.365 05:17:26 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:07.365 05:17:26 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:07.365 05:17:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:07.365 05:17:26 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:07.365 05:17:26 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:07.365 05:17:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:07.365 05:17:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.365 05:17:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.623 05:17:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:07.623 "name": "raid_bdev1", 00:20:07.623 "uuid": "d4cb13cc-c7b5-4c2c-ac3c-6fb8008033b4", 00:20:07.623 "strip_size_kb": 0, 00:20:07.623 "state": "online", 00:20:07.623 "raid_level": "raid1", 00:20:07.623 "superblock": false, 00:20:07.623 "num_base_bdevs": 2, 00:20:07.623 "num_base_bdevs_discovered": 2, 00:20:07.623 "num_base_bdevs_operational": 2, 00:20:07.623 "process": { 00:20:07.623 "type": "rebuild", 00:20:07.623 "target": "spare", 00:20:07.623 "progress": { 00:20:07.623 "blocks": 28672, 00:20:07.623 "percent": 43 00:20:07.623 } 00:20:07.623 }, 00:20:07.623 "base_bdevs_list": [ 00:20:07.623 { 00:20:07.623 "name": "spare", 00:20:07.623 "uuid": "f95c8517-7d4b-5eea-a866-1d9b1b5071f2", 00:20:07.623 "is_configured": true, 00:20:07.623 "data_offset": 0, 00:20:07.623 "data_size": 65536 00:20:07.623 }, 00:20:07.623 { 00:20:07.623 "name": "BaseBdev2", 00:20:07.623 "uuid": "1ae9343c-4d50-476f-a8a7-6af5d428913b", 00:20:07.623 "is_configured": true, 00:20:07.623 "data_offset": 0, 00:20:07.623 "data_size": 65536 00:20:07.623 } 00:20:07.623 ] 00:20:07.623 }' 00:20:07.623 05:17:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:07.623 05:17:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:07.623 05:17:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:07.623 05:17:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:07.623 05:17:26 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:08.560 05:17:27 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:08.560 05:17:27 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:08.560 05:17:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:08.560 05:17:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:08.560 05:17:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:08.560 05:17:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:08.560 05:17:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.560 05:17:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.818 05:17:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:08.818 "name": "raid_bdev1", 00:20:08.818 "uuid": "d4cb13cc-c7b5-4c2c-ac3c-6fb8008033b4", 00:20:08.818 "strip_size_kb": 0, 00:20:08.818 "state": "online", 00:20:08.818 "raid_level": "raid1", 00:20:08.818 "superblock": false, 00:20:08.818 "num_base_bdevs": 2, 00:20:08.818 "num_base_bdevs_discovered": 2, 00:20:08.818 "num_base_bdevs_operational": 2, 00:20:08.818 "process": { 00:20:08.818 "type": "rebuild", 00:20:08.818 "target": "spare", 
00:20:08.818 "progress": { 00:20:08.818 "blocks": 55296, 00:20:08.818 "percent": 84 00:20:08.818 } 00:20:08.818 }, 00:20:08.818 "base_bdevs_list": [ 00:20:08.818 { 00:20:08.818 "name": "spare", 00:20:08.818 "uuid": "f95c8517-7d4b-5eea-a866-1d9b1b5071f2", 00:20:08.818 "is_configured": true, 00:20:08.818 "data_offset": 0, 00:20:08.818 "data_size": 65536 00:20:08.818 }, 00:20:08.818 { 00:20:08.818 "name": "BaseBdev2", 00:20:08.818 "uuid": "1ae9343c-4d50-476f-a8a7-6af5d428913b", 00:20:08.818 "is_configured": true, 00:20:08.818 "data_offset": 0, 00:20:08.818 "data_size": 65536 00:20:08.818 } 00:20:08.818 ] 00:20:08.818 }' 00:20:08.818 05:17:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:08.818 05:17:27 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:08.818 05:17:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:08.818 05:17:27 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:08.818 05:17:27 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:09.386 [2024-07-26 05:17:28.295756] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:09.386 [2024-07-26 05:17:28.295832] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:09.386 [2024-07-26 05:17:28.295906] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:09.972 05:17:28 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:09.972 05:17:28 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:09.972 05:17:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:09.972 05:17:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:09.972 05:17:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:09.972 05:17:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:09.972 05:17:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.972 05:17:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.241 05:17:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:10.241 "name": "raid_bdev1", 00:20:10.241 "uuid": "d4cb13cc-c7b5-4c2c-ac3c-6fb8008033b4", 00:20:10.241 "strip_size_kb": 0, 00:20:10.241 "state": "online", 00:20:10.241 "raid_level": "raid1", 00:20:10.241 "superblock": false, 00:20:10.241 "num_base_bdevs": 2, 00:20:10.241 "num_base_bdevs_discovered": 2, 00:20:10.241 "num_base_bdevs_operational": 2, 00:20:10.241 "base_bdevs_list": [ 00:20:10.241 { 00:20:10.241 "name": "spare", 00:20:10.241 "uuid": "f95c8517-7d4b-5eea-a866-1d9b1b5071f2", 00:20:10.241 "is_configured": true, 00:20:10.241 "data_offset": 0, 00:20:10.241 "data_size": 65536 00:20:10.241 }, 00:20:10.241 { 00:20:10.241 "name": "BaseBdev2", 00:20:10.241 "uuid": "1ae9343c-4d50-476f-a8a7-6af5d428913b", 00:20:10.241 "is_configured": true, 00:20:10.241 "data_offset": 0, 00:20:10.241 "data_size": 65536 00:20:10.241 } 00:20:10.241 ] 00:20:10.241 }' 00:20:10.241 05:17:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:10.241 05:17:29 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:10.241 05:17:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:10.241 05:17:29 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:10.241 05:17:29 -- bdev/bdev_raid.sh@660 -- # break 00:20:10.241 05:17:29 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 
00:20:10.241 05:17:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:10.241 05:17:29 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:10.241 05:17:29 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:10.241 05:17:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:10.241 05:17:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.241 05:17:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.500 05:17:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:10.500 "name": "raid_bdev1", 00:20:10.500 "uuid": "d4cb13cc-c7b5-4c2c-ac3c-6fb8008033b4", 00:20:10.500 "strip_size_kb": 0, 00:20:10.500 "state": "online", 00:20:10.500 "raid_level": "raid1", 00:20:10.500 "superblock": false, 00:20:10.500 "num_base_bdevs": 2, 00:20:10.500 "num_base_bdevs_discovered": 2, 00:20:10.500 "num_base_bdevs_operational": 2, 00:20:10.500 "base_bdevs_list": [ 00:20:10.500 { 00:20:10.500 "name": "spare", 00:20:10.500 "uuid": "f95c8517-7d4b-5eea-a866-1d9b1b5071f2", 00:20:10.500 "is_configured": true, 00:20:10.500 "data_offset": 0, 00:20:10.500 "data_size": 65536 00:20:10.500 }, 00:20:10.500 { 00:20:10.500 "name": "BaseBdev2", 00:20:10.500 "uuid": "1ae9343c-4d50-476f-a8a7-6af5d428913b", 00:20:10.500 "is_configured": true, 00:20:10.500 "data_offset": 0, 00:20:10.500 "data_size": 65536 00:20:10.500 } 00:20:10.500 ] 00:20:10.500 }' 00:20:10.500 05:17:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:10.500 05:17:29 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:10.500 05:17:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:10.500 05:17:29 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:10.500 05:17:29 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:10.500 05:17:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:10.500 05:17:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:10.500 05:17:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:10.500 05:17:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:10.500 05:17:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:10.500 05:17:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:10.500 05:17:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:10.500 05:17:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:10.500 05:17:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:10.500 05:17:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.500 05:17:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.758 05:17:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:10.758 "name": "raid_bdev1", 00:20:10.758 "uuid": "d4cb13cc-c7b5-4c2c-ac3c-6fb8008033b4", 00:20:10.758 "strip_size_kb": 0, 00:20:10.758 "state": "online", 00:20:10.758 "raid_level": "raid1", 00:20:10.758 "superblock": false, 00:20:10.758 "num_base_bdevs": 2, 00:20:10.758 "num_base_bdevs_discovered": 2, 00:20:10.758 "num_base_bdevs_operational": 2, 00:20:10.758 "base_bdevs_list": [ 00:20:10.758 { 00:20:10.758 "name": "spare", 00:20:10.758 "uuid": "f95c8517-7d4b-5eea-a866-1d9b1b5071f2", 00:20:10.758 "is_configured": true, 00:20:10.758 "data_offset": 0, 00:20:10.758 "data_size": 65536 00:20:10.758 }, 00:20:10.758 { 00:20:10.758 "name": 
"BaseBdev2", 00:20:10.758 "uuid": "1ae9343c-4d50-476f-a8a7-6af5d428913b", 00:20:10.758 "is_configured": true, 00:20:10.758 "data_offset": 0, 00:20:10.758 "data_size": 65536 00:20:10.758 } 00:20:10.758 ] 00:20:10.758 }' 00:20:10.758 05:17:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:10.758 05:17:29 -- common/autotest_common.sh@10 -- # set +x 00:20:11.017 05:17:29 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:11.275 [2024-07-26 05:17:30.181292] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:11.275 [2024-07-26 05:17:30.181333] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:11.275 [2024-07-26 05:17:30.181410] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:11.275 [2024-07-26 05:17:30.181482] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:11.275 [2024-07-26 05:17:30.181499] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name raid_bdev1, state offline 00:20:11.275 05:17:30 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.275 05:17:30 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:11.535 05:17:30 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:11.535 05:17:30 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:20:11.535 05:17:30 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:11.535 05:17:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:11.535 05:17:30 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:11.535 05:17:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:11.535 05:17:30 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:11.535 05:17:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:11.535 05:17:30 -- bdev/nbd_common.sh@12 -- # local i 00:20:11.535 05:17:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:11.535 05:17:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:11.535 05:17:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:11.803 /dev/nbd0 00:20:11.803 05:17:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:11.803 05:17:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:11.803 05:17:30 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:11.803 05:17:30 -- common/autotest_common.sh@857 -- # local i 00:20:11.803 05:17:30 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:11.803 05:17:30 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:11.803 05:17:30 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:11.803 05:17:30 -- common/autotest_common.sh@861 -- # break 00:20:11.803 05:17:30 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:11.803 05:17:30 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:11.803 05:17:30 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:11.803 1+0 records in 00:20:11.803 1+0 records out 00:20:11.803 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238019 s, 17.2 MB/s 00:20:11.803 05:17:30 -- common/autotest_common.sh@874 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:11.803 05:17:30 -- common/autotest_common.sh@874 -- # size=4096 00:20:11.803 05:17:30 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:11.803 05:17:30 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:11.803 05:17:30 -- common/autotest_common.sh@877 -- # return 0 00:20:11.803 05:17:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:11.803 05:17:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:11.803 05:17:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:12.062 /dev/nbd1 00:20:12.062 05:17:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:12.062 05:17:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:12.062 05:17:30 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:20:12.062 05:17:30 -- common/autotest_common.sh@857 -- # local i 00:20:12.062 05:17:30 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:12.062 05:17:30 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:12.062 05:17:30 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:20:12.062 05:17:30 -- common/autotest_common.sh@861 -- # break 00:20:12.062 05:17:30 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:12.062 05:17:30 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:12.062 05:17:30 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:12.062 1+0 records in 00:20:12.062 1+0 records out 00:20:12.062 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298165 s, 13.7 MB/s 00:20:12.062 05:17:30 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:12.062 05:17:30 -- common/autotest_common.sh@874 -- # size=4096 00:20:12.062 05:17:30 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:12.062 05:17:30 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:12.062 05:17:30 -- common/autotest_common.sh@877 -- # return 0 00:20:12.062 05:17:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:12.062 05:17:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:12.062 05:17:30 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:12.062 05:17:31 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:12.062 05:17:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:12.062 05:17:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:12.062 05:17:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:12.062 05:17:31 -- bdev/nbd_common.sh@51 -- # local i 00:20:12.062 05:17:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:12.062 05:17:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:12.320 05:17:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:12.320 05:17:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:12.320 05:17:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:12.320 05:17:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:12.320 05:17:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:12.320 05:17:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:12.320 05:17:31 -- bdev/nbd_common.sh@41 -- # break 00:20:12.320 05:17:31 -- bdev/nbd_common.sh@45 -- # 
return 0 00:20:12.320 05:17:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:12.320 05:17:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:12.579 05:17:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:12.579 05:17:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:12.579 05:17:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:12.579 05:17:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:12.579 05:17:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:12.579 05:17:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:12.579 05:17:31 -- bdev/nbd_common.sh@41 -- # break 00:20:12.579 05:17:31 -- bdev/nbd_common.sh@45 -- # return 0 00:20:12.579 05:17:31 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:20:12.579 05:17:31 -- bdev/bdev_raid.sh@709 -- # killprocess 78126 00:20:12.579 05:17:31 -- common/autotest_common.sh@926 -- # '[' -z 78126 ']' 00:20:12.579 05:17:31 -- common/autotest_common.sh@930 -- # kill -0 78126 00:20:12.579 05:17:31 -- common/autotest_common.sh@931 -- # uname 00:20:12.579 05:17:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:12.579 05:17:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78126 00:20:12.579 killing process with pid 78126 00:20:12.579 Received shutdown signal, test time was about 60.000000 seconds 00:20:12.579 00:20:12.579 Latency(us) 00:20:12.579 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.579 =================================================================================================================== 00:20:12.579 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:12.579 05:17:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:12.579 05:17:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:12.579 05:17:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78126' 00:20:12.579 05:17:31 -- common/autotest_common.sh@945 -- # kill 78126 00:20:12.579 [2024-07-26 05:17:31.622863] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:12.579 05:17:31 -- common/autotest_common.sh@950 -- # wait 78126 00:20:12.837 [2024-07-26 05:17:31.820405] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:13.774 ************************************ 00:20:13.774 END TEST raid_rebuild_test 00:20:13.774 ************************************ 00:20:13.774 05:17:32 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:13.774 00:20:13.774 real 0m20.358s 00:20:13.774 user 0m25.484s 00:20:13.774 sys 0m4.143s 00:20:13.774 05:17:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:13.774 05:17:32 -- common/autotest_common.sh@10 -- # set +x 00:20:13.774 05:17:32 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false 00:20:13.774 05:17:32 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:13.774 05:17:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:13.774 05:17:32 -- common/autotest_common.sh@10 -- # set +x 00:20:13.774 ************************************ 00:20:13.774 START TEST raid_rebuild_test_sb 00:20:13.774 ************************************ 00:20:13.774 05:17:32 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true false 00:20:13.774 05:17:32 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:13.774 05:17:32 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 
00:20:13.774 05:17:32 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:20:13.774 05:17:32 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:20:13.774 05:17:32 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:13.774 05:17:32 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:13.774 05:17:32 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:20:13.774 05:17:32 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:13.774 05:17:32 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:13.774 05:17:32 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:20:13.774 05:17:32 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:13.774 05:17:32 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:13.774 05:17:32 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:13.774 05:17:32 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:13.774 05:17:32 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:13.774 05:17:32 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:13.774 05:17:32 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:13.774 05:17:32 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:13.774 05:17:32 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:13.774 05:17:32 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:13.774 05:17:32 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:13.774 05:17:32 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:20:13.774 05:17:32 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:20:13.774 05:17:32 -- bdev/bdev_raid.sh@544 -- # raid_pid=78620 00:20:13.774 05:17:32 -- bdev/bdev_raid.sh@545 -- # waitforlisten 78620 /var/tmp/spdk-raid.sock 00:20:13.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:13.774 05:17:32 -- common/autotest_common.sh@819 -- # '[' -z 78620 ']' 00:20:13.774 05:17:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:13.774 05:17:32 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:13.774 05:17:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:13.774 05:17:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:13.774 05:17:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:13.774 05:17:32 -- common/autotest_common.sh@10 -- # set +x 00:20:13.774 [2024-07-26 05:17:32.852336] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:13.774 [2024-07-26 05:17:32.852716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:20:13.774 Zero copy mechanism will not be used. 
00:20:13.774 :6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78620 ] 00:20:14.033 [2024-07-26 05:17:33.022794] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.291 [2024-07-26 05:17:33.184256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.291 [2024-07-26 05:17:33.326038] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:14.856 05:17:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:14.856 05:17:33 -- common/autotest_common.sh@852 -- # return 0 00:20:14.856 05:17:33 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:14.856 05:17:33 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:14.856 05:17:33 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:15.115 BaseBdev1_malloc 00:20:15.115 05:17:33 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:15.115 [2024-07-26 05:17:34.156453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:15.115 [2024-07-26 05:17:34.156542] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:15.115 [2024-07-26 05:17:34.156574] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:20:15.115 [2024-07-26 05:17:34.156589] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:15.115 [2024-07-26 05:17:34.159230] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:15.115 [2024-07-26 05:17:34.159280] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:15.115 BaseBdev1 00:20:15.115 05:17:34 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:15.115 05:17:34 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:15.115 05:17:34 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:15.373 BaseBdev2_malloc 00:20:15.373 05:17:34 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:15.632 [2024-07-26 05:17:34.560162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:15.632 [2024-07-26 05:17:34.560246] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:15.632 [2024-07-26 05:17:34.560284] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:20:15.632 [2024-07-26 05:17:34.560318] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:15.632 [2024-07-26 05:17:34.562578] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:15.632 BaseBdev2 00:20:15.632 [2024-07-26 05:17:34.562841] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:15.632 05:17:34 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:15.890 spare_malloc 00:20:15.890 05:17:34 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:15.890 spare_delay 00:20:15.890 05:17:34 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:16.149 [2024-07-26 05:17:35.137719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:16.149 [2024-07-26 05:17:35.137782] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.149 [2024-07-26 05:17:35.137808] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:20:16.149 [2024-07-26 05:17:35.137822] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.149 [2024-07-26 05:17:35.140158] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.149 [2024-07-26 05:17:35.140202] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:16.149 spare 00:20:16.149 05:17:35 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:16.407 [2024-07-26 05:17:35.377842] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:16.407 [2024-07-26 05:17:35.379822] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:16.407 [2024-07-26 05:17:35.380227] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:20:16.407 [2024-07-26 05:17:35.380393] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:16.407 [2024-07-26 05:17:35.380632] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:20:16.407 [2024-07-26 05:17:35.381129] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:20:16.407 [2024-07-26 05:17:35.381266] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:20:16.407 [2024-07-26 05:17:35.381540] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:16.407 05:17:35 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:16.407 05:17:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:16.407 05:17:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:16.407 05:17:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:16.407 05:17:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:16.407 05:17:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:16.407 05:17:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:16.407 05:17:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:16.407 05:17:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:16.407 05:17:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:16.407 05:17:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.407 05:17:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.665 05:17:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:16.665 "name": "raid_bdev1", 00:20:16.665 "uuid": "fb22dd5f-e149-4402-9b2c-fc03af69bf5e", 00:20:16.665 "strip_size_kb": 0, 00:20:16.665 "state": "online", 00:20:16.665 "raid_level": "raid1", 00:20:16.665 "superblock": true, 
00:20:16.665 "num_base_bdevs": 2, 00:20:16.665 "num_base_bdevs_discovered": 2, 00:20:16.666 "num_base_bdevs_operational": 2, 00:20:16.666 "base_bdevs_list": [ 00:20:16.666 { 00:20:16.666 "name": "BaseBdev1", 00:20:16.666 "uuid": "b19daa1d-e07e-51a9-ad48-e999303b7cae", 00:20:16.666 "is_configured": true, 00:20:16.666 "data_offset": 2048, 00:20:16.666 "data_size": 63488 00:20:16.666 }, 00:20:16.666 { 00:20:16.666 "name": "BaseBdev2", 00:20:16.666 "uuid": "6d617020-5281-5701-9798-fbc9029a4ba9", 00:20:16.666 "is_configured": true, 00:20:16.666 "data_offset": 2048, 00:20:16.666 "data_size": 63488 00:20:16.666 } 00:20:16.666 ] 00:20:16.666 }' 00:20:16.666 05:17:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:16.666 05:17:35 -- common/autotest_common.sh@10 -- # set +x 00:20:16.924 05:17:35 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:16.924 05:17:35 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:17.183 [2024-07-26 05:17:36.126148] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:17.183 05:17:36 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:20:17.183 05:17:36 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.183 05:17:36 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:17.441 05:17:36 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:20:17.441 05:17:36 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:20:17.441 05:17:36 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:20:17.441 05:17:36 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:17.441 05:17:36 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:17.441 05:17:36 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:17.441 05:17:36 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:17.441 05:17:36 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:17.441 05:17:36 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:17.441 05:17:36 -- bdev/nbd_common.sh@12 -- # local i 00:20:17.441 05:17:36 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:17.441 05:17:36 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:17.441 05:17:36 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:17.700 [2024-07-26 05:17:36.618109] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:20:17.700 /dev/nbd0 00:20:17.700 05:17:36 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:17.700 05:17:36 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:17.700 05:17:36 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:17.700 05:17:36 -- common/autotest_common.sh@857 -- # local i 00:20:17.700 05:17:36 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:17.700 05:17:36 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:17.700 05:17:36 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:17.700 05:17:36 -- common/autotest_common.sh@861 -- # break 00:20:17.700 05:17:36 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:17.700 05:17:36 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:17.700 05:17:36 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:17.700 1+0 
records in 00:20:17.700 1+0 records out 00:20:17.700 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033416 s, 12.3 MB/s 00:20:17.700 05:17:36 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:17.700 05:17:36 -- common/autotest_common.sh@874 -- # size=4096 00:20:17.700 05:17:36 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:17.700 05:17:36 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:17.700 05:17:36 -- common/autotest_common.sh@877 -- # return 0 00:20:17.700 05:17:36 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:17.700 05:17:36 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:17.700 05:17:36 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:20:17.700 05:17:36 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:20:17.700 05:17:36 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:20:22.967 63488+0 records in 00:20:22.967 63488+0 records out 00:20:22.967 32505856 bytes (33 MB, 31 MiB) copied, 5.26494 s, 6.2 MB/s 00:20:22.967 05:17:41 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:22.967 05:17:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:22.967 05:17:41 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:22.967 05:17:41 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:22.967 05:17:41 -- bdev/nbd_common.sh@51 -- # local i 00:20:22.967 05:17:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:22.967 05:17:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:23.225 05:17:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:23.225 05:17:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:23.225 05:17:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:23.225 05:17:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:23.225 05:17:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:23.225 05:17:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:23.225 [2024-07-26 05:17:42.172040] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:23.225 05:17:42 -- bdev/nbd_common.sh@41 -- # break 00:20:23.225 05:17:42 -- bdev/nbd_common.sh@45 -- # return 0 00:20:23.225 05:17:42 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:23.483 [2024-07-26 05:17:42.344467] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:23.483 05:17:42 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:23.483 05:17:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:23.483 05:17:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:23.483 05:17:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:23.483 05:17:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:23.483 05:17:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:23.483 05:17:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:23.483 05:17:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:23.483 05:17:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:23.483 05:17:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:23.483 05:17:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:20:23.483 05:17:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.483 05:17:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:23.483 "name": "raid_bdev1", 00:20:23.483 "uuid": "fb22dd5f-e149-4402-9b2c-fc03af69bf5e", 00:20:23.483 "strip_size_kb": 0, 00:20:23.483 "state": "online", 00:20:23.483 "raid_level": "raid1", 00:20:23.483 "superblock": true, 00:20:23.483 "num_base_bdevs": 2, 00:20:23.483 "num_base_bdevs_discovered": 1, 00:20:23.483 "num_base_bdevs_operational": 1, 00:20:23.483 "base_bdevs_list": [ 00:20:23.483 { 00:20:23.483 "name": null, 00:20:23.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.483 "is_configured": false, 00:20:23.483 "data_offset": 2048, 00:20:23.483 "data_size": 63488 00:20:23.483 }, 00:20:23.483 { 00:20:23.483 "name": "BaseBdev2", 00:20:23.483 "uuid": "6d617020-5281-5701-9798-fbc9029a4ba9", 00:20:23.483 "is_configured": true, 00:20:23.483 "data_offset": 2048, 00:20:23.483 "data_size": 63488 00:20:23.483 } 00:20:23.483 ] 00:20:23.483 }' 00:20:23.483 05:17:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:23.483 05:17:42 -- common/autotest_common.sh@10 -- # set +x 00:20:24.050 05:17:42 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:24.050 [2024-07-26 05:17:43.040715] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:24.050 [2024-07-26 05:17:43.040990] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:24.050 [2024-07-26 05:17:43.052616] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000ca2c10 00:20:24.050 [2024-07-26 05:17:43.054613] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:24.050 05:17:43 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:24.983 05:17:44 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:24.983 05:17:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:24.983 05:17:44 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:24.983 05:17:44 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:24.983 05:17:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:24.983 05:17:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.983 05:17:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.241 05:17:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:25.241 "name": "raid_bdev1", 00:20:25.241 "uuid": "fb22dd5f-e149-4402-9b2c-fc03af69bf5e", 00:20:25.241 "strip_size_kb": 0, 00:20:25.241 "state": "online", 00:20:25.241 "raid_level": "raid1", 00:20:25.241 "superblock": true, 00:20:25.241 "num_base_bdevs": 2, 00:20:25.241 "num_base_bdevs_discovered": 2, 00:20:25.241 "num_base_bdevs_operational": 2, 00:20:25.241 "process": { 00:20:25.241 "type": "rebuild", 00:20:25.241 "target": "spare", 00:20:25.241 "progress": { 00:20:25.241 "blocks": 24576, 00:20:25.241 "percent": 38 00:20:25.241 } 00:20:25.241 }, 00:20:25.241 "base_bdevs_list": [ 00:20:25.241 { 00:20:25.241 "name": "spare", 00:20:25.241 "uuid": "3b9d2ba4-9c67-5956-ba48-990e25b545c1", 00:20:25.241 "is_configured": true, 00:20:25.241 "data_offset": 2048, 00:20:25.241 "data_size": 63488 00:20:25.241 }, 00:20:25.241 { 00:20:25.241 "name": "BaseBdev2", 00:20:25.241 "uuid": 
"6d617020-5281-5701-9798-fbc9029a4ba9", 00:20:25.241 "is_configured": true, 00:20:25.241 "data_offset": 2048, 00:20:25.241 "data_size": 63488 00:20:25.241 } 00:20:25.241 ] 00:20:25.241 }' 00:20:25.241 05:17:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:25.241 05:17:44 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:25.241 05:17:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:25.241 05:17:44 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:25.241 05:17:44 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:25.499 [2024-07-26 05:17:44.572780] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:25.757 [2024-07-26 05:17:44.661943] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:25.757 [2024-07-26 05:17:44.662039] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:25.757 05:17:44 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:25.757 05:17:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:25.757 05:17:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:25.757 05:17:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:25.757 05:17:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:25.757 05:17:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:25.757 05:17:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:25.757 05:17:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:25.757 05:17:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:25.757 05:17:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:25.757 05:17:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.757 05:17:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.015 05:17:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:26.015 "name": "raid_bdev1", 00:20:26.015 "uuid": "fb22dd5f-e149-4402-9b2c-fc03af69bf5e", 00:20:26.015 "strip_size_kb": 0, 00:20:26.015 "state": "online", 00:20:26.015 "raid_level": "raid1", 00:20:26.015 "superblock": true, 00:20:26.015 "num_base_bdevs": 2, 00:20:26.015 "num_base_bdevs_discovered": 1, 00:20:26.015 "num_base_bdevs_operational": 1, 00:20:26.015 "base_bdevs_list": [ 00:20:26.015 { 00:20:26.015 "name": null, 00:20:26.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.015 "is_configured": false, 00:20:26.015 "data_offset": 2048, 00:20:26.015 "data_size": 63488 00:20:26.015 }, 00:20:26.015 { 00:20:26.015 "name": "BaseBdev2", 00:20:26.015 "uuid": "6d617020-5281-5701-9798-fbc9029a4ba9", 00:20:26.015 "is_configured": true, 00:20:26.015 "data_offset": 2048, 00:20:26.015 "data_size": 63488 00:20:26.015 } 00:20:26.015 ] 00:20:26.015 }' 00:20:26.015 05:17:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:26.015 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:20:26.273 05:17:45 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:26.273 05:17:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:26.273 05:17:45 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:26.273 05:17:45 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:26.273 05:17:45 -- bdev/bdev_raid.sh@186 -- # local 
raid_bdev_info 00:20:26.273 05:17:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.273 05:17:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.531 05:17:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:26.531 "name": "raid_bdev1", 00:20:26.531 "uuid": "fb22dd5f-e149-4402-9b2c-fc03af69bf5e", 00:20:26.531 "strip_size_kb": 0, 00:20:26.531 "state": "online", 00:20:26.531 "raid_level": "raid1", 00:20:26.531 "superblock": true, 00:20:26.531 "num_base_bdevs": 2, 00:20:26.531 "num_base_bdevs_discovered": 1, 00:20:26.531 "num_base_bdevs_operational": 1, 00:20:26.531 "base_bdevs_list": [ 00:20:26.531 { 00:20:26.531 "name": null, 00:20:26.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.531 "is_configured": false, 00:20:26.531 "data_offset": 2048, 00:20:26.531 "data_size": 63488 00:20:26.531 }, 00:20:26.531 { 00:20:26.531 "name": "BaseBdev2", 00:20:26.531 "uuid": "6d617020-5281-5701-9798-fbc9029a4ba9", 00:20:26.531 "is_configured": true, 00:20:26.531 "data_offset": 2048, 00:20:26.531 "data_size": 63488 00:20:26.531 } 00:20:26.532 ] 00:20:26.532 }' 00:20:26.532 05:17:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:26.532 05:17:45 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:26.532 05:17:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:26.532 05:17:45 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:26.532 05:17:45 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:26.790 [2024-07-26 05:17:45.761305] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:26.790 [2024-07-26 05:17:45.761351] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:26.790 [2024-07-26 05:17:45.771983] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000ca2ce0 00:20:26.790 [2024-07-26 05:17:45.773791] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:26.790 05:17:45 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:27.725 05:17:46 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:27.725 05:17:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:27.725 05:17:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:27.725 05:17:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:27.726 05:17:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:27.726 05:17:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.726 05:17:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.984 05:17:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:27.984 "name": "raid_bdev1", 00:20:27.984 "uuid": "fb22dd5f-e149-4402-9b2c-fc03af69bf5e", 00:20:27.984 "strip_size_kb": 0, 00:20:27.984 "state": "online", 00:20:27.984 "raid_level": "raid1", 00:20:27.984 "superblock": true, 00:20:27.984 "num_base_bdevs": 2, 00:20:27.984 "num_base_bdevs_discovered": 2, 00:20:27.984 "num_base_bdevs_operational": 2, 00:20:27.984 "process": { 00:20:27.984 "type": "rebuild", 00:20:27.984 "target": "spare", 00:20:27.984 "progress": { 00:20:27.984 "blocks": 24576, 00:20:27.984 "percent": 38 00:20:27.984 } 00:20:27.984 }, 00:20:27.984 
"base_bdevs_list": [ 00:20:27.984 { 00:20:27.984 "name": "spare", 00:20:27.984 "uuid": "3b9d2ba4-9c67-5956-ba48-990e25b545c1", 00:20:27.984 "is_configured": true, 00:20:27.984 "data_offset": 2048, 00:20:27.984 "data_size": 63488 00:20:27.984 }, 00:20:27.984 { 00:20:27.984 "name": "BaseBdev2", 00:20:27.984 "uuid": "6d617020-5281-5701-9798-fbc9029a4ba9", 00:20:27.984 "is_configured": true, 00:20:27.984 "data_offset": 2048, 00:20:27.984 "data_size": 63488 00:20:27.984 } 00:20:27.984 ] 00:20:27.984 }' 00:20:27.984 05:17:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:27.984 05:17:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:27.984 05:17:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:27.984 05:17:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:27.984 05:17:47 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:20:27.984 05:17:47 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:20:27.984 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:20:27.984 05:17:47 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:27.984 05:17:47 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:27.984 05:17:47 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:27.984 05:17:47 -- bdev/bdev_raid.sh@657 -- # local timeout=374 00:20:27.984 05:17:47 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:27.984 05:17:47 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:27.984 05:17:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:27.984 05:17:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:27.984 05:17:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:27.984 05:17:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:27.984 05:17:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.984 05:17:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.243 05:17:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:28.243 "name": "raid_bdev1", 00:20:28.243 "uuid": "fb22dd5f-e149-4402-9b2c-fc03af69bf5e", 00:20:28.243 "strip_size_kb": 0, 00:20:28.243 "state": "online", 00:20:28.243 "raid_level": "raid1", 00:20:28.243 "superblock": true, 00:20:28.243 "num_base_bdevs": 2, 00:20:28.243 "num_base_bdevs_discovered": 2, 00:20:28.243 "num_base_bdevs_operational": 2, 00:20:28.243 "process": { 00:20:28.243 "type": "rebuild", 00:20:28.243 "target": "spare", 00:20:28.243 "progress": { 00:20:28.243 "blocks": 30720, 00:20:28.243 "percent": 48 00:20:28.243 } 00:20:28.243 }, 00:20:28.243 "base_bdevs_list": [ 00:20:28.243 { 00:20:28.243 "name": "spare", 00:20:28.243 "uuid": "3b9d2ba4-9c67-5956-ba48-990e25b545c1", 00:20:28.243 "is_configured": true, 00:20:28.243 "data_offset": 2048, 00:20:28.243 "data_size": 63488 00:20:28.243 }, 00:20:28.243 { 00:20:28.243 "name": "BaseBdev2", 00:20:28.243 "uuid": "6d617020-5281-5701-9798-fbc9029a4ba9", 00:20:28.243 "is_configured": true, 00:20:28.243 "data_offset": 2048, 00:20:28.243 "data_size": 63488 00:20:28.243 } 00:20:28.243 ] 00:20:28.243 }' 00:20:28.243 05:17:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:28.243 05:17:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:28.243 05:17:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:28.243 05:17:47 -- bdev/bdev_raid.sh@191 
-- # [[ spare == \s\p\a\r\e ]] 00:20:28.243 05:17:47 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:29.618 05:17:48 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:29.619 05:17:48 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:29.619 05:17:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:29.619 05:17:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:29.619 05:17:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:29.619 05:17:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:29.619 05:17:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.619 05:17:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.619 05:17:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:29.619 "name": "raid_bdev1", 00:20:29.619 "uuid": "fb22dd5f-e149-4402-9b2c-fc03af69bf5e", 00:20:29.619 "strip_size_kb": 0, 00:20:29.619 "state": "online", 00:20:29.619 "raid_level": "raid1", 00:20:29.619 "superblock": true, 00:20:29.619 "num_base_bdevs": 2, 00:20:29.619 "num_base_bdevs_discovered": 2, 00:20:29.619 "num_base_bdevs_operational": 2, 00:20:29.619 "process": { 00:20:29.619 "type": "rebuild", 00:20:29.619 "target": "spare", 00:20:29.619 "progress": { 00:20:29.619 "blocks": 55296, 00:20:29.619 "percent": 87 00:20:29.619 } 00:20:29.619 }, 00:20:29.619 "base_bdevs_list": [ 00:20:29.619 { 00:20:29.619 "name": "spare", 00:20:29.619 "uuid": "3b9d2ba4-9c67-5956-ba48-990e25b545c1", 00:20:29.619 "is_configured": true, 00:20:29.619 "data_offset": 2048, 00:20:29.619 "data_size": 63488 00:20:29.619 }, 00:20:29.619 { 00:20:29.619 "name": "BaseBdev2", 00:20:29.619 "uuid": "6d617020-5281-5701-9798-fbc9029a4ba9", 00:20:29.619 "is_configured": true, 00:20:29.619 "data_offset": 2048, 00:20:29.619 "data_size": 63488 00:20:29.619 } 00:20:29.619 ] 00:20:29.619 }' 00:20:29.619 05:17:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:29.619 05:17:48 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:29.619 05:17:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:29.619 05:17:48 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:29.619 05:17:48 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:29.877 [2024-07-26 05:17:48.887489] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:29.877 [2024-07-26 05:17:48.887557] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:29.877 [2024-07-26 05:17:48.887672] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:30.814 05:17:49 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:30.814 05:17:49 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:30.814 05:17:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:30.814 05:17:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:30.814 05:17:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:30.814 05:17:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:30.814 05:17:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.814 05:17:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.814 05:17:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:30.814 
"name": "raid_bdev1", 00:20:30.814 "uuid": "fb22dd5f-e149-4402-9b2c-fc03af69bf5e", 00:20:30.814 "strip_size_kb": 0, 00:20:30.814 "state": "online", 00:20:30.814 "raid_level": "raid1", 00:20:30.814 "superblock": true, 00:20:30.814 "num_base_bdevs": 2, 00:20:30.814 "num_base_bdevs_discovered": 2, 00:20:30.814 "num_base_bdevs_operational": 2, 00:20:30.814 "base_bdevs_list": [ 00:20:30.814 { 00:20:30.814 "name": "spare", 00:20:30.814 "uuid": "3b9d2ba4-9c67-5956-ba48-990e25b545c1", 00:20:30.814 "is_configured": true, 00:20:30.814 "data_offset": 2048, 00:20:30.814 "data_size": 63488 00:20:30.814 }, 00:20:30.814 { 00:20:30.814 "name": "BaseBdev2", 00:20:30.814 "uuid": "6d617020-5281-5701-9798-fbc9029a4ba9", 00:20:30.814 "is_configured": true, 00:20:30.814 "data_offset": 2048, 00:20:30.814 "data_size": 63488 00:20:30.814 } 00:20:30.814 ] 00:20:30.814 }' 00:20:30.814 05:17:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:30.814 05:17:49 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:30.814 05:17:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:30.814 05:17:49 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:30.814 05:17:49 -- bdev/bdev_raid.sh@660 -- # break 00:20:30.814 05:17:49 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:30.814 05:17:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:30.814 05:17:49 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:30.814 05:17:49 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:30.814 05:17:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:30.814 05:17:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.814 05:17:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.073 05:17:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:31.073 "name": "raid_bdev1", 00:20:31.073 "uuid": "fb22dd5f-e149-4402-9b2c-fc03af69bf5e", 00:20:31.073 "strip_size_kb": 0, 00:20:31.073 "state": "online", 00:20:31.073 "raid_level": "raid1", 00:20:31.073 "superblock": true, 00:20:31.073 "num_base_bdevs": 2, 00:20:31.073 "num_base_bdevs_discovered": 2, 00:20:31.073 "num_base_bdevs_operational": 2, 00:20:31.073 "base_bdevs_list": [ 00:20:31.073 { 00:20:31.073 "name": "spare", 00:20:31.073 "uuid": "3b9d2ba4-9c67-5956-ba48-990e25b545c1", 00:20:31.073 "is_configured": true, 00:20:31.073 "data_offset": 2048, 00:20:31.073 "data_size": 63488 00:20:31.073 }, 00:20:31.073 { 00:20:31.073 "name": "BaseBdev2", 00:20:31.073 "uuid": "6d617020-5281-5701-9798-fbc9029a4ba9", 00:20:31.073 "is_configured": true, 00:20:31.073 "data_offset": 2048, 00:20:31.073 "data_size": 63488 00:20:31.073 } 00:20:31.073 ] 00:20:31.073 }' 00:20:31.073 05:17:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:31.073 05:17:50 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:31.073 05:17:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:31.073 05:17:50 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:31.073 05:17:50 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:31.073 05:17:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:31.073 05:17:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:31.073 05:17:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:31.073 05:17:50 -- bdev/bdev_raid.sh@120 -- # local 
strip_size=0 00:20:31.073 05:17:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:31.073 05:17:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:31.073 05:17:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:31.073 05:17:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:31.073 05:17:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:31.073 05:17:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.073 05:17:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.332 05:17:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:31.332 "name": "raid_bdev1", 00:20:31.332 "uuid": "fb22dd5f-e149-4402-9b2c-fc03af69bf5e", 00:20:31.332 "strip_size_kb": 0, 00:20:31.332 "state": "online", 00:20:31.332 "raid_level": "raid1", 00:20:31.332 "superblock": true, 00:20:31.332 "num_base_bdevs": 2, 00:20:31.332 "num_base_bdevs_discovered": 2, 00:20:31.332 "num_base_bdevs_operational": 2, 00:20:31.332 "base_bdevs_list": [ 00:20:31.332 { 00:20:31.332 "name": "spare", 00:20:31.332 "uuid": "3b9d2ba4-9c67-5956-ba48-990e25b545c1", 00:20:31.332 "is_configured": true, 00:20:31.332 "data_offset": 2048, 00:20:31.332 "data_size": 63488 00:20:31.332 }, 00:20:31.332 { 00:20:31.332 "name": "BaseBdev2", 00:20:31.332 "uuid": "6d617020-5281-5701-9798-fbc9029a4ba9", 00:20:31.332 "is_configured": true, 00:20:31.332 "data_offset": 2048, 00:20:31.332 "data_size": 63488 00:20:31.332 } 00:20:31.332 ] 00:20:31.332 }' 00:20:31.332 05:17:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:31.332 05:17:50 -- common/autotest_common.sh@10 -- # set +x 00:20:31.591 05:17:50 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:31.850 [2024-07-26 05:17:50.876262] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:31.850 [2024-07-26 05:17:50.876452] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:31.850 [2024-07-26 05:17:50.876569] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:31.850 [2024-07-26 05:17:50.876649] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:31.850 [2024-07-26 05:17:50.876664] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:20:31.850 05:17:50 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.850 05:17:50 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:32.109 05:17:51 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:32.110 05:17:51 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:20:32.110 05:17:51 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:32.110 05:17:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:32.110 05:17:51 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:32.110 05:17:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:32.110 05:17:51 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:32.110 05:17:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:32.110 05:17:51 -- bdev/nbd_common.sh@12 -- # local i 00:20:32.110 05:17:51 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:32.110 
05:17:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:32.110 05:17:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:32.369 /dev/nbd0 00:20:32.369 05:17:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:32.369 05:17:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:32.369 05:17:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:32.369 05:17:51 -- common/autotest_common.sh@857 -- # local i 00:20:32.369 05:17:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:32.369 05:17:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:32.369 05:17:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:32.369 05:17:51 -- common/autotest_common.sh@861 -- # break 00:20:32.369 05:17:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:32.369 05:17:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:32.369 05:17:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:32.369 1+0 records in 00:20:32.369 1+0 records out 00:20:32.369 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278249 s, 14.7 MB/s 00:20:32.369 05:17:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:32.369 05:17:51 -- common/autotest_common.sh@874 -- # size=4096 00:20:32.369 05:17:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:32.369 05:17:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:32.369 05:17:51 -- common/autotest_common.sh@877 -- # return 0 00:20:32.369 05:17:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:32.369 05:17:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:32.369 05:17:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:32.628 /dev/nbd1 00:20:32.628 05:17:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:32.628 05:17:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:32.629 05:17:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:20:32.629 05:17:51 -- common/autotest_common.sh@857 -- # local i 00:20:32.629 05:17:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:32.629 05:17:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:32.629 05:17:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:20:32.629 05:17:51 -- common/autotest_common.sh@861 -- # break 00:20:32.629 05:17:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:32.629 05:17:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:32.629 05:17:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:32.629 1+0 records in 00:20:32.629 1+0 records out 00:20:32.629 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0026926 s, 1.5 MB/s 00:20:32.629 05:17:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:32.629 05:17:51 -- common/autotest_common.sh@874 -- # size=4096 00:20:32.629 05:17:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:32.629 05:17:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:32.629 05:17:51 -- common/autotest_common.sh@877 -- # return 0 00:20:32.629 05:17:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:20:32.629 05:17:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:32.629 05:17:51 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:32.629 05:17:51 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:32.629 05:17:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:32.629 05:17:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:32.629 05:17:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:32.629 05:17:51 -- bdev/nbd_common.sh@51 -- # local i 00:20:32.629 05:17:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:32.629 05:17:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:32.887 05:17:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:32.887 05:17:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:32.887 05:17:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:32.887 05:17:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:32.887 05:17:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:32.887 05:17:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:32.887 05:17:51 -- bdev/nbd_common.sh@41 -- # break 00:20:32.887 05:17:51 -- bdev/nbd_common.sh@45 -- # return 0 00:20:32.887 05:17:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:32.887 05:17:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:33.146 05:17:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:33.147 05:17:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:33.147 05:17:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:33.147 05:17:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:33.147 05:17:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:33.147 05:17:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:33.147 05:17:52 -- bdev/nbd_common.sh@41 -- # break 00:20:33.147 05:17:52 -- bdev/nbd_common.sh@45 -- # return 0 00:20:33.147 05:17:52 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:20:33.147 05:17:52 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:33.147 05:17:52 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:20:33.147 05:17:52 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:20:33.406 05:17:52 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:33.663 [2024-07-26 05:17:52.714603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:33.663 [2024-07-26 05:17:52.715243] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:33.663 [2024-07-26 05:17:52.715389] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:20:33.663 [2024-07-26 05:17:52.715474] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:33.663 [2024-07-26 05:17:52.717512] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:33.663 [2024-07-26 05:17:52.717739] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:33.663 [2024-07-26 05:17:52.717959] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev 
BaseBdev1 00:20:33.663 [2024-07-26 05:17:52.718155] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:33.663 BaseBdev1 00:20:33.663 05:17:52 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:33.663 05:17:52 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:20:33.663 05:17:52 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:20:33.921 05:17:52 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:34.180 [2024-07-26 05:17:53.146724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:34.180 [2024-07-26 05:17:53.147203] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:34.180 [2024-07-26 05:17:53.147488] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a580 00:20:34.180 [2024-07-26 05:17:53.147696] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:34.180 [2024-07-26 05:17:53.148423] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:34.180 [2024-07-26 05:17:53.148553] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:34.180 [2024-07-26 05:17:53.148718] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:20:34.180 [2024-07-26 05:17:53.148738] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:20:34.180 [2024-07-26 05:17:53.148750] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:34.180 [2024-07-26 05:17:53.148774] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a280 name raid_bdev1, state configuring 00:20:34.180 [2024-07-26 05:17:53.148840] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:34.180 BaseBdev2 00:20:34.180 05:17:53 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:34.438 05:17:53 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:34.696 [2024-07-26 05:17:53.562878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:34.696 [2024-07-26 05:17:53.563209] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:34.696 [2024-07-26 05:17:53.563452] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:20:34.696 [2024-07-26 05:17:53.563589] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:34.696 [2024-07-26 05:17:53.564262] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:34.696 [2024-07-26 05:17:53.564576] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:34.696 [2024-07-26 05:17:53.564863] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:20:34.696 [2024-07-26 05:17:53.565029] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:34.696 spare 00:20:34.696 05:17:53 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 
00:20:34.696 05:17:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:34.696 05:17:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:34.696 05:17:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:34.696 05:17:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:34.696 05:17:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:34.696 05:17:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:34.696 05:17:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:34.696 05:17:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:34.696 05:17:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:34.696 05:17:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.696 05:17:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.696 [2024-07-26 05:17:53.665244] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a880 00:20:34.696 [2024-07-26 05:17:53.665277] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:34.696 [2024-07-26 05:17:53.665385] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000cc1390 00:20:34.696 [2024-07-26 05:17:53.665714] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a880 00:20:34.696 [2024-07-26 05:17:53.665730] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a880 00:20:34.696 [2024-07-26 05:17:53.665857] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:34.955 05:17:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:34.955 "name": "raid_bdev1", 00:20:34.955 "uuid": "fb22dd5f-e149-4402-9b2c-fc03af69bf5e", 00:20:34.955 "strip_size_kb": 0, 00:20:34.955 "state": "online", 00:20:34.955 "raid_level": "raid1", 00:20:34.955 "superblock": true, 00:20:34.955 "num_base_bdevs": 2, 00:20:34.955 "num_base_bdevs_discovered": 2, 00:20:34.955 "num_base_bdevs_operational": 2, 00:20:34.955 "base_bdevs_list": [ 00:20:34.955 { 00:20:34.955 "name": "spare", 00:20:34.955 "uuid": "3b9d2ba4-9c67-5956-ba48-990e25b545c1", 00:20:34.955 "is_configured": true, 00:20:34.955 "data_offset": 2048, 00:20:34.955 "data_size": 63488 00:20:34.955 }, 00:20:34.955 { 00:20:34.955 "name": "BaseBdev2", 00:20:34.955 "uuid": "6d617020-5281-5701-9798-fbc9029a4ba9", 00:20:34.955 "is_configured": true, 00:20:34.955 "data_offset": 2048, 00:20:34.955 "data_size": 63488 00:20:34.955 } 00:20:34.955 ] 00:20:34.955 }' 00:20:34.955 05:17:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:34.955 05:17:53 -- common/autotest_common.sh@10 -- # set +x 00:20:35.213 05:17:54 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:35.213 05:17:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:35.213 05:17:54 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:35.213 05:17:54 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:35.213 05:17:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:35.213 05:17:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.213 05:17:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.472 05:17:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:35.472 "name": "raid_bdev1", 00:20:35.472 "uuid": 
"fb22dd5f-e149-4402-9b2c-fc03af69bf5e", 00:20:35.472 "strip_size_kb": 0, 00:20:35.472 "state": "online", 00:20:35.472 "raid_level": "raid1", 00:20:35.472 "superblock": true, 00:20:35.472 "num_base_bdevs": 2, 00:20:35.472 "num_base_bdevs_discovered": 2, 00:20:35.472 "num_base_bdevs_operational": 2, 00:20:35.472 "base_bdevs_list": [ 00:20:35.472 { 00:20:35.472 "name": "spare", 00:20:35.472 "uuid": "3b9d2ba4-9c67-5956-ba48-990e25b545c1", 00:20:35.472 "is_configured": true, 00:20:35.472 "data_offset": 2048, 00:20:35.472 "data_size": 63488 00:20:35.472 }, 00:20:35.472 { 00:20:35.472 "name": "BaseBdev2", 00:20:35.472 "uuid": "6d617020-5281-5701-9798-fbc9029a4ba9", 00:20:35.472 "is_configured": true, 00:20:35.472 "data_offset": 2048, 00:20:35.472 "data_size": 63488 00:20:35.472 } 00:20:35.472 ] 00:20:35.472 }' 00:20:35.472 05:17:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:35.472 05:17:54 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:35.472 05:17:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:35.472 05:17:54 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:35.472 05:17:54 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:35.472 05:17:54 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.730 05:17:54 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:20:35.730 05:17:54 -- bdev/bdev_raid.sh@709 -- # killprocess 78620 00:20:35.730 05:17:54 -- common/autotest_common.sh@926 -- # '[' -z 78620 ']' 00:20:35.730 05:17:54 -- common/autotest_common.sh@930 -- # kill -0 78620 00:20:35.730 05:17:54 -- common/autotest_common.sh@931 -- # uname 00:20:35.730 05:17:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:35.730 05:17:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78620 00:20:35.730 killing process with pid 78620 00:20:35.730 Received shutdown signal, test time was about 60.000000 seconds 00:20:35.730 00:20:35.730 Latency(us) 00:20:35.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.730 =================================================================================================================== 00:20:35.730 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:35.730 05:17:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:35.730 05:17:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:35.730 05:17:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78620' 00:20:35.730 05:17:54 -- common/autotest_common.sh@945 -- # kill 78620 00:20:35.730 05:17:54 -- common/autotest_common.sh@950 -- # wait 78620 00:20:35.730 [2024-07-26 05:17:54.727786] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:35.730 [2024-07-26 05:17:54.727880] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:35.730 [2024-07-26 05:17:54.727975] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:35.730 [2024-07-26 05:17:54.728001] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a880 name raid_bdev1, state offline 00:20:35.989 [2024-07-26 05:17:54.916835] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:36.927 05:17:55 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:36.927 00:20:36.927 real 0m23.055s 00:20:36.927 user 0m31.070s 00:20:36.927 sys 0m4.302s 
00:20:36.927 ************************************ 00:20:36.927 END TEST raid_rebuild_test_sb 00:20:36.927 ************************************ 00:20:36.927 05:17:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:36.927 05:17:55 -- common/autotest_common.sh@10 -- # set +x 00:20:36.927 05:17:55 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true 00:20:36.927 05:17:55 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:36.927 05:17:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:36.927 05:17:55 -- common/autotest_common.sh@10 -- # set +x 00:20:36.927 ************************************ 00:20:36.927 START TEST raid_rebuild_test_io 00:20:36.927 ************************************ 00:20:36.927 05:17:55 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false true 00:20:36.927 05:17:55 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:36.927 05:17:55 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:36.927 05:17:55 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:20:36.927 05:17:55 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:20:36.927 05:17:55 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:36.927 05:17:55 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:36.927 05:17:55 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:20:36.927 05:17:55 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:36.927 05:17:55 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:36.927 05:17:55 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:20:36.927 05:17:55 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:36.927 05:17:55 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:36.927 05:17:55 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:36.927 05:17:55 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:36.927 05:17:55 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:36.927 05:17:55 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:36.927 05:17:55 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:36.927 05:17:55 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:36.927 05:17:55 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:36.927 05:17:55 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:36.927 05:17:55 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:36.927 05:17:55 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:20:36.927 05:17:55 -- bdev/bdev_raid.sh@544 -- # raid_pid=79194 00:20:36.927 05:17:55 -- bdev/bdev_raid.sh@545 -- # waitforlisten 79194 /var/tmp/spdk-raid.sock 00:20:36.927 05:17:55 -- common/autotest_common.sh@819 -- # '[' -z 79194 ']' 00:20:36.927 05:17:55 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:36.927 05:17:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:36.927 05:17:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:36.927 05:17:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:36.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
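The raid_rebuild_test_io case exercises the rebuild while bdevperf generates background I/O against raid_bdev1. A stripped-down sketch of how that process is started, with the binary path, socket, and flags copied from the command line traced above (waitforlisten is a helper from the test framework, shown only as a placeholder):

# Launch bdevperf on its own RPC socket; -z makes it wait until tests are
# kicked off later via bdevperf.py perform_tests (visible further down in this log).
#   -t 60 -w randrw -M 50 : 60 s of mixed random reads/writes (50 % reads)
#   -o 3M -q 2            : 3 MiB I/Os at queue depth 2 (large enough that zero-copy is disabled, per the notice above)
#   -L bdev_raid          : enable the bdev_raid debug logging seen throughout this trace
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
    -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!
waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock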
00:20:36.927 05:17:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:36.927 05:17:55 -- common/autotest_common.sh@10 -- # set +x 00:20:36.927 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:36.927 Zero copy mechanism will not be used. 00:20:36.927 [2024-07-26 05:17:55.966158] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:36.927 [2024-07-26 05:17:55.966333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79194 ] 00:20:37.186 [2024-07-26 05:17:56.136390] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.186 [2024-07-26 05:17:56.284358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.445 [2024-07-26 05:17:56.427554] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:38.013 05:17:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:38.013 05:17:56 -- common/autotest_common.sh@852 -- # return 0 00:20:38.013 05:17:56 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:38.013 05:17:56 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:38.013 05:17:56 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:38.013 BaseBdev1 00:20:38.013 05:17:57 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:38.013 05:17:57 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:38.013 05:17:57 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:38.272 BaseBdev2 00:20:38.272 05:17:57 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:38.532 spare_malloc 00:20:38.532 05:17:57 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:38.790 spare_delay 00:20:38.790 05:17:57 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:39.049 [2024-07-26 05:17:57.933178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:39.049 [2024-07-26 05:17:57.933255] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:39.049 [2024-07-26 05:17:57.933287] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007b80 00:20:39.049 [2024-07-26 05:17:57.933303] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:39.049 [2024-07-26 05:17:57.935570] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:39.049 [2024-07-26 05:17:57.935630] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:39.049 spare 00:20:39.049 05:17:57 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:39.049 [2024-07-26 05:17:58.121277] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:39.049 [2024-07-26 05:17:58.123078] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:39.049 [2024-07-26 05:17:58.123162] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008180 00:20:39.049 [2024-07-26 05:17:58.123182] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:39.049 [2024-07-26 05:17:58.123296] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:20:39.049 [2024-07-26 05:17:58.123617] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008180 00:20:39.049 [2024-07-26 05:17:58.123633] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008180 00:20:39.049 [2024-07-26 05:17:58.123782] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:39.049 05:17:58 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:39.049 05:17:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:39.049 05:17:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:39.049 05:17:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:39.049 05:17:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:39.049 05:17:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:39.049 05:17:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:39.049 05:17:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:39.049 05:17:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:39.049 05:17:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:39.049 05:17:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.049 05:17:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.308 05:17:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:39.308 "name": "raid_bdev1", 00:20:39.308 "uuid": "0a1f7dfe-97a0-471f-a0f7-371f505b1878", 00:20:39.308 "strip_size_kb": 0, 00:20:39.308 "state": "online", 00:20:39.308 "raid_level": "raid1", 00:20:39.308 "superblock": false, 00:20:39.308 "num_base_bdevs": 2, 00:20:39.308 "num_base_bdevs_discovered": 2, 00:20:39.308 "num_base_bdevs_operational": 2, 00:20:39.308 "base_bdevs_list": [ 00:20:39.308 { 00:20:39.308 "name": "BaseBdev1", 00:20:39.308 "uuid": "a4044c25-4166-4754-8a8c-f073c47a17a4", 00:20:39.308 "is_configured": true, 00:20:39.308 "data_offset": 0, 00:20:39.308 "data_size": 65536 00:20:39.308 }, 00:20:39.308 { 00:20:39.308 "name": "BaseBdev2", 00:20:39.308 "uuid": "d51bb216-00e5-4762-b5e7-6e718951423b", 00:20:39.308 "is_configured": true, 00:20:39.308 "data_offset": 0, 00:20:39.308 "data_size": 65536 00:20:39.308 } 00:20:39.308 ] 00:20:39.308 }' 00:20:39.308 05:17:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:39.308 05:17:58 -- common/autotest_common.sh@10 -- # set +x 00:20:39.578 05:17:58 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:39.578 05:17:58 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:39.851 [2024-07-26 05:17:58.821576] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:39.851 05:17:58 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:20:39.851 05:17:58 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.851 
05:17:58 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:40.109 05:17:59 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:20:40.109 05:17:59 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:20:40.109 05:17:59 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:40.109 05:17:59 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:40.109 [2024-07-26 05:17:59.187659] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:20:40.109 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:40.109 Zero copy mechanism will not be used. 00:20:40.109 Running I/O for 60 seconds... 00:20:40.368 [2024-07-26 05:17:59.268217] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:40.369 [2024-07-26 05:17:59.280211] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005790 00:20:40.369 05:17:59 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:40.369 05:17:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:40.369 05:17:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:40.369 05:17:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:40.369 05:17:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:40.369 05:17:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:40.369 05:17:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:40.369 05:17:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:40.369 05:17:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:40.369 05:17:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:40.369 05:17:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.369 05:17:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.627 05:17:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:40.627 "name": "raid_bdev1", 00:20:40.627 "uuid": "0a1f7dfe-97a0-471f-a0f7-371f505b1878", 00:20:40.627 "strip_size_kb": 0, 00:20:40.627 "state": "online", 00:20:40.627 "raid_level": "raid1", 00:20:40.627 "superblock": false, 00:20:40.627 "num_base_bdevs": 2, 00:20:40.627 "num_base_bdevs_discovered": 1, 00:20:40.627 "num_base_bdevs_operational": 1, 00:20:40.627 "base_bdevs_list": [ 00:20:40.627 { 00:20:40.627 "name": null, 00:20:40.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.627 "is_configured": false, 00:20:40.627 "data_offset": 0, 00:20:40.627 "data_size": 65536 00:20:40.627 }, 00:20:40.627 { 00:20:40.627 "name": "BaseBdev2", 00:20:40.627 "uuid": "d51bb216-00e5-4762-b5e7-6e718951423b", 00:20:40.627 "is_configured": true, 00:20:40.627 "data_offset": 0, 00:20:40.627 "data_size": 65536 00:20:40.627 } 00:20:40.627 ] 00:20:40.627 }' 00:20:40.627 05:17:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:40.627 05:17:59 -- common/autotest_common.sh@10 -- # set +x 00:20:40.886 05:17:59 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:41.145 [2024-07-26 05:18:00.083809] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:41.145 [2024-07-26 05:18:00.083865] bdev_raid.c:2939:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:20:41.145 [2024-07-26 05:18:00.124175] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:20:41.145 [2024-07-26 05:18:00.126044] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:41.145 05:18:00 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:41.145 [2024-07-26 05:18:00.241453] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:41.145 [2024-07-26 05:18:00.241841] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:41.403 [2024-07-26 05:18:00.470259] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:41.403 [2024-07-26 05:18:00.470534] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:41.662 [2024-07-26 05:18:00.701495] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:41.662 [2024-07-26 05:18:00.702053] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:41.921 [2024-07-26 05:18:00.930360] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:41.921 [2024-07-26 05:18:00.930820] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:42.179 05:18:01 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:42.179 05:18:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:42.179 05:18:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:42.179 05:18:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:42.179 05:18:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:42.179 05:18:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.179 05:18:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.179 [2024-07-26 05:18:01.154743] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:42.179 [2024-07-26 05:18:01.155120] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:42.437 [2024-07-26 05:18:01.364341] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:42.437 05:18:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:42.437 "name": "raid_bdev1", 00:20:42.437 "uuid": "0a1f7dfe-97a0-471f-a0f7-371f505b1878", 00:20:42.437 "strip_size_kb": 0, 00:20:42.437 "state": "online", 00:20:42.437 "raid_level": "raid1", 00:20:42.437 "superblock": false, 00:20:42.437 "num_base_bdevs": 2, 00:20:42.437 "num_base_bdevs_discovered": 2, 00:20:42.437 "num_base_bdevs_operational": 2, 00:20:42.437 "process": { 00:20:42.437 "type": "rebuild", 00:20:42.437 "target": "spare", 00:20:42.437 "progress": { 00:20:42.437 "blocks": 14336, 00:20:42.437 "percent": 21 00:20:42.437 } 00:20:42.437 }, 00:20:42.437 "base_bdevs_list": [ 00:20:42.437 { 00:20:42.437 "name": "spare", 00:20:42.437 "uuid": "b3ba7a8c-b1a6-5631-917f-5164a6136b79", 
00:20:42.437 "is_configured": true, 00:20:42.437 "data_offset": 0, 00:20:42.437 "data_size": 65536 00:20:42.437 }, 00:20:42.437 { 00:20:42.437 "name": "BaseBdev2", 00:20:42.437 "uuid": "d51bb216-00e5-4762-b5e7-6e718951423b", 00:20:42.437 "is_configured": true, 00:20:42.437 "data_offset": 0, 00:20:42.437 "data_size": 65536 00:20:42.437 } 00:20:42.437 ] 00:20:42.438 }' 00:20:42.438 05:18:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:42.438 [2024-07-26 05:18:01.379462] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:42.438 05:18:01 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:42.438 05:18:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:42.438 05:18:01 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:42.438 05:18:01 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:42.696 [2024-07-26 05:18:01.595542] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:42.696 [2024-07-26 05:18:01.628482] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:42.696 [2024-07-26 05:18:01.636605] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:42.696 [2024-07-26 05:18:01.667909] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005790 00:20:42.696 05:18:01 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:42.696 05:18:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:42.696 05:18:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:42.696 05:18:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:42.696 05:18:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:42.696 05:18:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:42.696 05:18:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:42.696 05:18:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:42.696 05:18:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:42.696 05:18:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:42.696 05:18:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.696 05:18:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.954 05:18:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:42.954 "name": "raid_bdev1", 00:20:42.954 "uuid": "0a1f7dfe-97a0-471f-a0f7-371f505b1878", 00:20:42.954 "strip_size_kb": 0, 00:20:42.954 "state": "online", 00:20:42.954 "raid_level": "raid1", 00:20:42.954 "superblock": false, 00:20:42.954 "num_base_bdevs": 2, 00:20:42.954 "num_base_bdevs_discovered": 1, 00:20:42.954 "num_base_bdevs_operational": 1, 00:20:42.954 "base_bdevs_list": [ 00:20:42.954 { 00:20:42.954 "name": null, 00:20:42.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.954 "is_configured": false, 00:20:42.954 "data_offset": 0, 00:20:42.954 "data_size": 65536 00:20:42.954 }, 00:20:42.954 { 00:20:42.954 "name": "BaseBdev2", 00:20:42.954 "uuid": "d51bb216-00e5-4762-b5e7-6e718951423b", 00:20:42.954 "is_configured": true, 00:20:42.954 "data_offset": 0, 00:20:42.954 "data_size": 65536 00:20:42.954 } 00:20:42.954 ] 00:20:42.954 }' 00:20:42.954 05:18:01 -- bdev/bdev_raid.sh@129 -- # 
xtrace_disable 00:20:42.954 05:18:01 -- common/autotest_common.sh@10 -- # set +x 00:20:43.213 05:18:02 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:43.213 05:18:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:43.213 05:18:02 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:43.213 05:18:02 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:43.213 05:18:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:43.213 05:18:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.213 05:18:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.471 05:18:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:43.471 "name": "raid_bdev1", 00:20:43.471 "uuid": "0a1f7dfe-97a0-471f-a0f7-371f505b1878", 00:20:43.471 "strip_size_kb": 0, 00:20:43.471 "state": "online", 00:20:43.471 "raid_level": "raid1", 00:20:43.471 "superblock": false, 00:20:43.471 "num_base_bdevs": 2, 00:20:43.471 "num_base_bdevs_discovered": 1, 00:20:43.471 "num_base_bdevs_operational": 1, 00:20:43.471 "base_bdevs_list": [ 00:20:43.471 { 00:20:43.471 "name": null, 00:20:43.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.471 "is_configured": false, 00:20:43.471 "data_offset": 0, 00:20:43.471 "data_size": 65536 00:20:43.471 }, 00:20:43.471 { 00:20:43.471 "name": "BaseBdev2", 00:20:43.471 "uuid": "d51bb216-00e5-4762-b5e7-6e718951423b", 00:20:43.471 "is_configured": true, 00:20:43.471 "data_offset": 0, 00:20:43.471 "data_size": 65536 00:20:43.471 } 00:20:43.471 ] 00:20:43.471 }' 00:20:43.471 05:18:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:43.730 05:18:02 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:43.730 05:18:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:43.730 05:18:02 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:43.730 05:18:02 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:43.730 [2024-07-26 05:18:02.781139] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:43.730 [2024-07-26 05:18:02.781184] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:43.730 05:18:02 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:43.730 [2024-07-26 05:18:02.834813] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:20:43.730 [2024-07-26 05:18:02.837089] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:43.989 [2024-07-26 05:18:02.966250] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:43.989 [2024-07-26 05:18:02.966887] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:44.248 [2024-07-26 05:18:03.168129] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:44.248 [2024-07-26 05:18:03.168300] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:44.812 [2024-07-26 05:18:03.618799] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:44.812 05:18:03 -- bdev/bdev_raid.sh@615 
-- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:44.812 05:18:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:44.812 05:18:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:44.812 05:18:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:44.812 05:18:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:44.812 05:18:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.812 05:18:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.071 [2024-07-26 05:18:04.097355] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:45.071 05:18:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:45.071 "name": "raid_bdev1", 00:20:45.072 "uuid": "0a1f7dfe-97a0-471f-a0f7-371f505b1878", 00:20:45.072 "strip_size_kb": 0, 00:20:45.072 "state": "online", 00:20:45.072 "raid_level": "raid1", 00:20:45.072 "superblock": false, 00:20:45.072 "num_base_bdevs": 2, 00:20:45.072 "num_base_bdevs_discovered": 2, 00:20:45.072 "num_base_bdevs_operational": 2, 00:20:45.072 "process": { 00:20:45.072 "type": "rebuild", 00:20:45.072 "target": "spare", 00:20:45.072 "progress": { 00:20:45.072 "blocks": 14336, 00:20:45.072 "percent": 21 00:20:45.072 } 00:20:45.072 }, 00:20:45.072 "base_bdevs_list": [ 00:20:45.072 { 00:20:45.072 "name": "spare", 00:20:45.072 "uuid": "b3ba7a8c-b1a6-5631-917f-5164a6136b79", 00:20:45.072 "is_configured": true, 00:20:45.072 "data_offset": 0, 00:20:45.072 "data_size": 65536 00:20:45.072 }, 00:20:45.072 { 00:20:45.072 "name": "BaseBdev2", 00:20:45.072 "uuid": "d51bb216-00e5-4762-b5e7-6e718951423b", 00:20:45.072 "is_configured": true, 00:20:45.072 "data_offset": 0, 00:20:45.072 "data_size": 65536 00:20:45.072 } 00:20:45.072 ] 00:20:45.072 }' 00:20:45.072 05:18:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:45.072 05:18:04 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:45.072 05:18:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:45.072 05:18:04 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:45.072 05:18:04 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:20:45.072 05:18:04 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:45.072 05:18:04 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:45.072 05:18:04 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:45.072 05:18:04 -- bdev/bdev_raid.sh@657 -- # local timeout=391 00:20:45.072 05:18:04 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:45.072 05:18:04 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:45.072 05:18:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:45.072 05:18:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:45.072 05:18:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:45.072 05:18:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:45.072 05:18:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.072 05:18:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.331 05:18:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:45.331 "name": "raid_bdev1", 00:20:45.331 "uuid": "0a1f7dfe-97a0-471f-a0f7-371f505b1878", 00:20:45.331 "strip_size_kb": 0, 00:20:45.331 
"state": "online", 00:20:45.331 "raid_level": "raid1", 00:20:45.331 "superblock": false, 00:20:45.331 "num_base_bdevs": 2, 00:20:45.331 "num_base_bdevs_discovered": 2, 00:20:45.331 "num_base_bdevs_operational": 2, 00:20:45.331 "process": { 00:20:45.331 "type": "rebuild", 00:20:45.331 "target": "spare", 00:20:45.331 "progress": { 00:20:45.331 "blocks": 16384, 00:20:45.331 "percent": 25 00:20:45.331 } 00:20:45.331 }, 00:20:45.331 "base_bdevs_list": [ 00:20:45.331 { 00:20:45.331 "name": "spare", 00:20:45.331 "uuid": "b3ba7a8c-b1a6-5631-917f-5164a6136b79", 00:20:45.331 "is_configured": true, 00:20:45.331 "data_offset": 0, 00:20:45.331 "data_size": 65536 00:20:45.331 }, 00:20:45.331 { 00:20:45.331 "name": "BaseBdev2", 00:20:45.331 "uuid": "d51bb216-00e5-4762-b5e7-6e718951423b", 00:20:45.331 "is_configured": true, 00:20:45.331 "data_offset": 0, 00:20:45.331 "data_size": 65536 00:20:45.331 } 00:20:45.331 ] 00:20:45.331 }' 00:20:45.331 05:18:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:45.331 05:18:04 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:45.331 05:18:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:45.331 05:18:04 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:45.331 05:18:04 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:45.590 [2024-07-26 05:18:04.537118] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:45.850 [2024-07-26 05:18:04.847777] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:20:45.850 [2024-07-26 05:18:04.948956] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:46.109 [2024-07-26 05:18:05.171614] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:20:46.368 05:18:05 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:46.368 05:18:05 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:46.368 05:18:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:46.368 05:18:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:46.368 05:18:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:46.368 05:18:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:46.368 05:18:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.368 05:18:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.627 05:18:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:46.627 "name": "raid_bdev1", 00:20:46.627 "uuid": "0a1f7dfe-97a0-471f-a0f7-371f505b1878", 00:20:46.627 "strip_size_kb": 0, 00:20:46.627 "state": "online", 00:20:46.627 "raid_level": "raid1", 00:20:46.627 "superblock": false, 00:20:46.627 "num_base_bdevs": 2, 00:20:46.627 "num_base_bdevs_discovered": 2, 00:20:46.627 "num_base_bdevs_operational": 2, 00:20:46.627 "process": { 00:20:46.627 "type": "rebuild", 00:20:46.627 "target": "spare", 00:20:46.627 "progress": { 00:20:46.627 "blocks": 34816, 00:20:46.627 "percent": 53 00:20:46.627 } 00:20:46.627 }, 00:20:46.627 "base_bdevs_list": [ 00:20:46.627 { 00:20:46.627 "name": "spare", 00:20:46.627 "uuid": "b3ba7a8c-b1a6-5631-917f-5164a6136b79", 00:20:46.627 "is_configured": true, 00:20:46.627 "data_offset": 0, 
00:20:46.627 "data_size": 65536 00:20:46.627 }, 00:20:46.627 { 00:20:46.627 "name": "BaseBdev2", 00:20:46.627 "uuid": "d51bb216-00e5-4762-b5e7-6e718951423b", 00:20:46.627 "is_configured": true, 00:20:46.627 "data_offset": 0, 00:20:46.627 "data_size": 65536 00:20:46.627 } 00:20:46.627 ] 00:20:46.627 }' 00:20:46.627 05:18:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:46.627 05:18:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:46.627 05:18:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:46.627 05:18:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:46.627 05:18:05 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:47.564 05:18:06 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:47.564 05:18:06 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:47.564 05:18:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:47.564 05:18:06 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:47.564 05:18:06 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:47.564 05:18:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:47.564 05:18:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.564 05:18:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.564 [2024-07-26 05:18:06.609497] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:20:47.564 [2024-07-26 05:18:06.609813] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:20:47.823 05:18:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:47.823 "name": "raid_bdev1", 00:20:47.823 "uuid": "0a1f7dfe-97a0-471f-a0f7-371f505b1878", 00:20:47.823 "strip_size_kb": 0, 00:20:47.823 "state": "online", 00:20:47.823 "raid_level": "raid1", 00:20:47.823 "superblock": false, 00:20:47.823 "num_base_bdevs": 2, 00:20:47.823 "num_base_bdevs_discovered": 2, 00:20:47.823 "num_base_bdevs_operational": 2, 00:20:47.823 "process": { 00:20:47.823 "type": "rebuild", 00:20:47.823 "target": "spare", 00:20:47.823 "progress": { 00:20:47.823 "blocks": 57344, 00:20:47.823 "percent": 87 00:20:47.823 } 00:20:47.823 }, 00:20:47.823 "base_bdevs_list": [ 00:20:47.823 { 00:20:47.823 "name": "spare", 00:20:47.823 "uuid": "b3ba7a8c-b1a6-5631-917f-5164a6136b79", 00:20:47.823 "is_configured": true, 00:20:47.823 "data_offset": 0, 00:20:47.823 "data_size": 65536 00:20:47.823 }, 00:20:47.823 { 00:20:47.823 "name": "BaseBdev2", 00:20:47.823 "uuid": "d51bb216-00e5-4762-b5e7-6e718951423b", 00:20:47.823 "is_configured": true, 00:20:47.823 "data_offset": 0, 00:20:47.823 "data_size": 65536 00:20:47.823 } 00:20:47.823 ] 00:20:47.823 }' 00:20:47.823 05:18:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:47.823 05:18:06 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:47.823 05:18:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:47.823 05:18:06 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:47.823 05:18:06 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:47.823 [2024-07-26 05:18:06.824695] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:20:48.081 [2024-07-26 05:18:07.154993] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:20:48.339 [2024-07-26 05:18:07.255073] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:48.339 [2024-07-26 05:18:07.262545] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:48.907 05:18:07 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:48.907 05:18:07 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:48.907 05:18:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:48.907 05:18:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:48.907 05:18:07 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:48.907 05:18:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:48.907 05:18:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.907 05:18:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.165 05:18:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:49.165 "name": "raid_bdev1", 00:20:49.165 "uuid": "0a1f7dfe-97a0-471f-a0f7-371f505b1878", 00:20:49.165 "strip_size_kb": 0, 00:20:49.165 "state": "online", 00:20:49.165 "raid_level": "raid1", 00:20:49.165 "superblock": false, 00:20:49.165 "num_base_bdevs": 2, 00:20:49.165 "num_base_bdevs_discovered": 2, 00:20:49.165 "num_base_bdevs_operational": 2, 00:20:49.165 "base_bdevs_list": [ 00:20:49.165 { 00:20:49.165 "name": "spare", 00:20:49.165 "uuid": "b3ba7a8c-b1a6-5631-917f-5164a6136b79", 00:20:49.165 "is_configured": true, 00:20:49.165 "data_offset": 0, 00:20:49.165 "data_size": 65536 00:20:49.165 }, 00:20:49.165 { 00:20:49.165 "name": "BaseBdev2", 00:20:49.165 "uuid": "d51bb216-00e5-4762-b5e7-6e718951423b", 00:20:49.165 "is_configured": true, 00:20:49.165 "data_offset": 0, 00:20:49.165 "data_size": 65536 00:20:49.165 } 00:20:49.165 ] 00:20:49.165 }' 00:20:49.165 05:18:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:49.165 05:18:08 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:49.165 05:18:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:49.165 05:18:08 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:49.165 05:18:08 -- bdev/bdev_raid.sh@660 -- # break 00:20:49.165 05:18:08 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:49.165 05:18:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:49.165 05:18:08 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:49.165 05:18:08 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:49.165 05:18:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:49.165 05:18:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.165 05:18:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.423 05:18:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:49.423 "name": "raid_bdev1", 00:20:49.423 "uuid": "0a1f7dfe-97a0-471f-a0f7-371f505b1878", 00:20:49.423 "strip_size_kb": 0, 00:20:49.423 "state": "online", 00:20:49.423 "raid_level": "raid1", 00:20:49.423 "superblock": false, 00:20:49.423 "num_base_bdevs": 2, 00:20:49.423 "num_base_bdevs_discovered": 2, 00:20:49.423 "num_base_bdevs_operational": 2, 00:20:49.423 "base_bdevs_list": [ 00:20:49.423 { 00:20:49.423 "name": "spare", 00:20:49.423 "uuid": "b3ba7a8c-b1a6-5631-917f-5164a6136b79", 00:20:49.423 
"is_configured": true, 00:20:49.423 "data_offset": 0, 00:20:49.423 "data_size": 65536 00:20:49.423 }, 00:20:49.423 { 00:20:49.423 "name": "BaseBdev2", 00:20:49.423 "uuid": "d51bb216-00e5-4762-b5e7-6e718951423b", 00:20:49.423 "is_configured": true, 00:20:49.423 "data_offset": 0, 00:20:49.423 "data_size": 65536 00:20:49.423 } 00:20:49.423 ] 00:20:49.423 }' 00:20:49.423 05:18:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:49.424 05:18:08 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:49.424 05:18:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:49.424 05:18:08 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:49.424 05:18:08 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:49.424 05:18:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:49.424 05:18:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:49.424 05:18:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:49.424 05:18:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:49.424 05:18:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:49.424 05:18:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:49.424 05:18:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:49.424 05:18:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:49.424 05:18:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:49.424 05:18:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.424 05:18:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.682 05:18:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:49.682 "name": "raid_bdev1", 00:20:49.682 "uuid": "0a1f7dfe-97a0-471f-a0f7-371f505b1878", 00:20:49.682 "strip_size_kb": 0, 00:20:49.682 "state": "online", 00:20:49.682 "raid_level": "raid1", 00:20:49.682 "superblock": false, 00:20:49.682 "num_base_bdevs": 2, 00:20:49.682 "num_base_bdevs_discovered": 2, 00:20:49.682 "num_base_bdevs_operational": 2, 00:20:49.682 "base_bdevs_list": [ 00:20:49.682 { 00:20:49.682 "name": "spare", 00:20:49.682 "uuid": "b3ba7a8c-b1a6-5631-917f-5164a6136b79", 00:20:49.682 "is_configured": true, 00:20:49.682 "data_offset": 0, 00:20:49.682 "data_size": 65536 00:20:49.682 }, 00:20:49.682 { 00:20:49.682 "name": "BaseBdev2", 00:20:49.682 "uuid": "d51bb216-00e5-4762-b5e7-6e718951423b", 00:20:49.682 "is_configured": true, 00:20:49.682 "data_offset": 0, 00:20:49.682 "data_size": 65536 00:20:49.682 } 00:20:49.682 ] 00:20:49.682 }' 00:20:49.682 05:18:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:49.682 05:18:08 -- common/autotest_common.sh@10 -- # set +x 00:20:49.940 05:18:08 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:50.199 [2024-07-26 05:18:09.112113] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:50.199 [2024-07-26 05:18:09.112148] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:50.199 00:20:50.199 Latency(us) 00:20:50.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.199 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:20:50.199 raid_bdev1 : 10.02 94.99 284.97 0.00 0.00 14600.23 260.65 113436.86 00:20:50.199 
=================================================================================================================== 00:20:50.199 Total : 94.99 284.97 0.00 0.00 14600.23 260.65 113436.86 00:20:50.199 0 00:20:50.199 [2024-07-26 05:18:09.227291] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:50.199 [2024-07-26 05:18:09.227336] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:50.199 [2024-07-26 05:18:09.227410] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:50.199 [2024-07-26 05:18:09.227425] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name raid_bdev1, state offline 00:20:50.199 05:18:09 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:50.199 05:18:09 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.457 05:18:09 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:50.457 05:18:09 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:20:50.457 05:18:09 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:20:50.457 05:18:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:50.457 05:18:09 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:20:50.457 05:18:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:50.457 05:18:09 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:50.457 05:18:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:50.457 05:18:09 -- bdev/nbd_common.sh@12 -- # local i 00:20:50.457 05:18:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:50.457 05:18:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:50.457 05:18:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:20:50.716 /dev/nbd0 00:20:50.716 05:18:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:50.716 05:18:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:50.716 05:18:09 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:50.716 05:18:09 -- common/autotest_common.sh@857 -- # local i 00:20:50.716 05:18:09 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:50.716 05:18:09 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:50.716 05:18:09 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:50.716 05:18:09 -- common/autotest_common.sh@861 -- # break 00:20:50.716 05:18:09 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:50.716 05:18:09 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:50.716 05:18:09 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:50.716 1+0 records in 00:20:50.716 1+0 records out 00:20:50.716 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223467 s, 18.3 MB/s 00:20:50.716 05:18:09 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:50.716 05:18:09 -- common/autotest_common.sh@874 -- # size=4096 00:20:50.716 05:18:09 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:50.716 05:18:09 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:50.716 05:18:09 -- common/autotest_common.sh@877 -- # return 0 00:20:50.716 05:18:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:50.716 05:18:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:50.716 05:18:09 -- 
bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:20:50.716 05:18:09 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:20:50.716 05:18:09 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:20:50.716 05:18:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:50.716 05:18:09 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:20:50.716 05:18:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:50.716 05:18:09 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:50.716 05:18:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:50.716 05:18:09 -- bdev/nbd_common.sh@12 -- # local i 00:20:50.716 05:18:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:50.716 05:18:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:50.716 05:18:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:20:50.974 /dev/nbd1 00:20:50.974 05:18:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:50.974 05:18:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:50.974 05:18:09 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:20:50.974 05:18:09 -- common/autotest_common.sh@857 -- # local i 00:20:50.974 05:18:09 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:50.974 05:18:09 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:50.974 05:18:09 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:20:50.974 05:18:09 -- common/autotest_common.sh@861 -- # break 00:20:50.974 05:18:09 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:50.974 05:18:09 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:50.974 05:18:09 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:50.974 1+0 records in 00:20:50.974 1+0 records out 00:20:50.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002718 s, 15.1 MB/s 00:20:50.975 05:18:09 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:50.975 05:18:09 -- common/autotest_common.sh@874 -- # size=4096 00:20:50.975 05:18:09 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:50.975 05:18:09 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:50.975 05:18:09 -- common/autotest_common.sh@877 -- # return 0 00:20:50.975 05:18:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:50.975 05:18:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:50.975 05:18:09 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:51.233 05:18:10 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:20:51.233 05:18:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:51.233 05:18:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:51.233 05:18:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:51.233 05:18:10 -- bdev/nbd_common.sh@51 -- # local i 00:20:51.233 05:18:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:51.233 05:18:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:51.491 05:18:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:51.491 05:18:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:51.491 05:18:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:51.491 05:18:10 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:51.491 05:18:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:51.491 05:18:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:51.491 05:18:10 -- bdev/nbd_common.sh@41 -- # break 00:20:51.491 05:18:10 -- bdev/nbd_common.sh@45 -- # return 0 00:20:51.491 05:18:10 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:51.491 05:18:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:51.491 05:18:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:51.491 05:18:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:51.491 05:18:10 -- bdev/nbd_common.sh@51 -- # local i 00:20:51.491 05:18:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:51.491 05:18:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:51.749 05:18:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:51.749 05:18:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:51.749 05:18:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:51.749 05:18:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:51.749 05:18:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:51.749 05:18:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:51.749 05:18:10 -- bdev/nbd_common.sh@41 -- # break 00:20:51.749 05:18:10 -- bdev/nbd_common.sh@45 -- # return 0 00:20:51.749 05:18:10 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:20:51.749 05:18:10 -- bdev/bdev_raid.sh@709 -- # killprocess 79194 00:20:51.749 05:18:10 -- common/autotest_common.sh@926 -- # '[' -z 79194 ']' 00:20:51.749 05:18:10 -- common/autotest_common.sh@930 -- # kill -0 79194 00:20:51.749 05:18:10 -- common/autotest_common.sh@931 -- # uname 00:20:51.749 05:18:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:51.749 05:18:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79194 00:20:51.749 killing process with pid 79194 00:20:51.749 Received shutdown signal, test time was about 11.525804 seconds 00:20:51.749 00:20:51.749 Latency(us) 00:20:51.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.749 =================================================================================================================== 00:20:51.749 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:51.749 05:18:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:51.749 05:18:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:51.749 05:18:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79194' 00:20:51.749 05:18:10 -- common/autotest_common.sh@945 -- # kill 79194 00:20:51.750 [2024-07-26 05:18:10.715570] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:51.750 05:18:10 -- common/autotest_common.sh@950 -- # wait 79194 00:20:52.008 [2024-07-26 05:18:10.869551] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:52.943 ************************************ 00:20:52.943 END TEST raid_rebuild_test_io 00:20:52.943 ************************************ 00:20:52.943 05:18:11 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:52.943 00:20:52.943 real 0m15.926s 00:20:52.943 user 0m22.738s 00:20:52.943 sys 0m1.900s 00:20:52.943 05:18:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:52.943 05:18:11 -- common/autotest_common.sh@10 -- # set +x 00:20:52.943 05:18:11 -- bdev/bdev_raid.sh@738 -- # 
run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true 00:20:52.943 05:18:11 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:52.943 05:18:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:52.943 05:18:11 -- common/autotest_common.sh@10 -- # set +x 00:20:52.943 ************************************ 00:20:52.943 START TEST raid_rebuild_test_sb_io 00:20:52.943 ************************************ 00:20:52.943 05:18:11 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true true 00:20:52.943 05:18:11 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:52.943 05:18:11 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:52.943 05:18:11 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:20:52.943 05:18:11 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:20:52.943 05:18:11 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:52.943 05:18:11 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:52.943 05:18:11 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:20:52.943 05:18:11 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:52.943 05:18:11 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:52.943 05:18:11 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:20:52.943 05:18:11 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:52.943 05:18:11 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:52.943 05:18:11 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:52.943 05:18:11 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:52.943 05:18:11 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:52.943 05:18:11 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:52.943 05:18:11 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:52.943 05:18:11 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:52.943 05:18:11 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:52.943 05:18:11 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:52.943 05:18:11 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:52.943 05:18:11 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:20:52.943 05:18:11 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:20:52.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:52.943 05:18:11 -- bdev/bdev_raid.sh@544 -- # raid_pid=79627 00:20:52.943 05:18:11 -- bdev/bdev_raid.sh@545 -- # waitforlisten 79627 /var/tmp/spdk-raid.sock 00:20:52.943 05:18:11 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:52.943 05:18:11 -- common/autotest_common.sh@819 -- # '[' -z 79627 ']' 00:20:52.943 05:18:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:52.943 05:18:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:52.943 05:18:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:52.943 05:18:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:52.943 05:18:11 -- common/autotest_common.sh@10 -- # set +x 00:20:52.943 [2024-07-26 05:18:11.921980] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:20:52.943 [2024-07-26 05:18:11.922347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79627 ] 00:20:52.943 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:52.943 Zero copy mechanism will not be used. 00:20:53.202 [2024-07-26 05:18:12.071936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.202 [2024-07-26 05:18:12.231213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.463 [2024-07-26 05:18:12.375603] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:53.733 05:18:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:53.733 05:18:12 -- common/autotest_common.sh@852 -- # return 0 00:20:53.733 05:18:12 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:53.733 05:18:12 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:53.733 05:18:12 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:54.004 BaseBdev1_malloc 00:20:54.004 05:18:13 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:54.261 [2024-07-26 05:18:13.253410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:54.261 [2024-07-26 05:18:13.253501] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.261 [2024-07-26 05:18:13.253541] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:20:54.261 [2024-07-26 05:18:13.253559] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.261 [2024-07-26 05:18:13.255907] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.261 [2024-07-26 05:18:13.255951] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:54.261 BaseBdev1 00:20:54.261 05:18:13 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:54.261 05:18:13 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:54.261 05:18:13 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:54.519 BaseBdev2_malloc 00:20:54.519 05:18:13 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:54.778 [2024-07-26 05:18:13.774149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:54.778 [2024-07-26 05:18:13.774396] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.778 [2024-07-26 05:18:13.774487] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:20:54.778 [2024-07-26 05:18:13.774512] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.778 [2024-07-26 05:18:13.776815] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.778 [2024-07-26 05:18:13.776875] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:54.778 BaseBdev2 00:20:54.778 05:18:13 -- bdev/bdev_raid.sh@558 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:55.037 spare_malloc 00:20:55.037 05:18:13 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:55.296 spare_delay 00:20:55.296 05:18:14 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:55.296 [2024-07-26 05:18:14.342300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:55.296 [2024-07-26 05:18:14.342556] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:55.296 [2024-07-26 05:18:14.342593] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:20:55.296 [2024-07-26 05:18:14.342610] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:55.296 [2024-07-26 05:18:14.344777] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:55.296 [2024-07-26 05:18:14.344822] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:55.296 spare 00:20:55.296 05:18:14 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:55.555 [2024-07-26 05:18:14.534410] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:55.555 [2024-07-26 05:18:14.536433] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:55.555 [2024-07-26 05:18:14.536609] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:20:55.555 [2024-07-26 05:18:14.536629] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:55.555 [2024-07-26 05:18:14.536742] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:20:55.555 [2024-07-26 05:18:14.537131] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:20:55.555 [2024-07-26 05:18:14.537147] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:20:55.555 [2024-07-26 05:18:14.537309] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:55.555 05:18:14 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:55.555 05:18:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:55.555 05:18:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:55.555 05:18:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:55.555 05:18:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:55.555 05:18:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:55.555 05:18:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:55.555 05:18:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:55.555 05:18:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:55.555 05:18:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:55.555 05:18:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.555 05:18:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.814 
05:18:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:55.814 "name": "raid_bdev1", 00:20:55.814 "uuid": "a3bd4e0d-cf5b-4e8f-ab33-5d3073ffa9dd", 00:20:55.814 "strip_size_kb": 0, 00:20:55.814 "state": "online", 00:20:55.814 "raid_level": "raid1", 00:20:55.814 "superblock": true, 00:20:55.814 "num_base_bdevs": 2, 00:20:55.814 "num_base_bdevs_discovered": 2, 00:20:55.814 "num_base_bdevs_operational": 2, 00:20:55.814 "base_bdevs_list": [ 00:20:55.814 { 00:20:55.814 "name": "BaseBdev1", 00:20:55.814 "uuid": "bf34e98e-62da-5a85-a05d-bcc626f975a8", 00:20:55.814 "is_configured": true, 00:20:55.814 "data_offset": 2048, 00:20:55.814 "data_size": 63488 00:20:55.814 }, 00:20:55.814 { 00:20:55.814 "name": "BaseBdev2", 00:20:55.814 "uuid": "36dbe769-c7b6-5280-96e0-55048fea8aa3", 00:20:55.814 "is_configured": true, 00:20:55.814 "data_offset": 2048, 00:20:55.814 "data_size": 63488 00:20:55.814 } 00:20:55.814 ] 00:20:55.814 }' 00:20:55.814 05:18:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:55.814 05:18:14 -- common/autotest_common.sh@10 -- # set +x 00:20:56.073 05:18:14 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:56.073 05:18:14 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:56.331 [2024-07-26 05:18:15.234702] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:56.331 05:18:15 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:20:56.331 05:18:15 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.331 05:18:15 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:56.331 05:18:15 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:20:56.331 05:18:15 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:20:56.331 05:18:15 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:56.331 05:18:15 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:56.590 [2024-07-26 05:18:15.561126] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:20:56.590 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:56.590 Zero copy mechanism will not be used. 00:20:56.590 Running I/O for 60 seconds... 
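The preceding xtrace is verify_raid_bdev_state() checking the freshly assembled array: it pulls the JSON dump shown above through the per-test RPC socket and compares a handful of fields before any I/O is issued. A minimal stand-alone sketch of that query, built only from the rpc.py and jq invocations that appear in the trace (the expected values are illustrative, taken from this particular run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # Pull the raid_bdev1 descriptor out of the full bdev_raid_get_bdevs dump.
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  # Fields the harness asserts for a healthy 2-disk raid1 built with a superblock.
  [[ $(jq -r '.state' <<<"$info") == online ]]
  [[ $(jq -r '.raid_level' <<<"$info") == raid1 ]]
  [[ $(jq -r '.num_base_bdevs_discovered' <<<"$info") -eq 2 ]]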
00:20:56.590 [2024-07-26 05:18:15.668945] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:56.590 [2024-07-26 05:18:15.681372] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005790 00:20:56.849 05:18:15 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:56.849 05:18:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:56.849 05:18:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:56.849 05:18:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:56.849 05:18:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:56.849 05:18:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:56.849 05:18:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:56.849 05:18:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:56.849 05:18:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:56.849 05:18:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:56.849 05:18:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.849 05:18:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.849 05:18:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:56.849 "name": "raid_bdev1", 00:20:56.849 "uuid": "a3bd4e0d-cf5b-4e8f-ab33-5d3073ffa9dd", 00:20:56.849 "strip_size_kb": 0, 00:20:56.849 "state": "online", 00:20:56.849 "raid_level": "raid1", 00:20:56.849 "superblock": true, 00:20:56.849 "num_base_bdevs": 2, 00:20:56.849 "num_base_bdevs_discovered": 1, 00:20:56.849 "num_base_bdevs_operational": 1, 00:20:56.849 "base_bdevs_list": [ 00:20:56.849 { 00:20:56.849 "name": null, 00:20:56.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.849 "is_configured": false, 00:20:56.849 "data_offset": 2048, 00:20:56.849 "data_size": 63488 00:20:56.849 }, 00:20:56.849 { 00:20:56.849 "name": "BaseBdev2", 00:20:56.849 "uuid": "36dbe769-c7b6-5280-96e0-55048fea8aa3", 00:20:56.849 "is_configured": true, 00:20:56.849 "data_offset": 2048, 00:20:56.849 "data_size": 63488 00:20:56.849 } 00:20:56.849 ] 00:20:56.849 }' 00:20:56.849 05:18:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:56.849 05:18:15 -- common/autotest_common.sh@10 -- # set +x 00:20:57.108 05:18:16 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:57.367 [2024-07-26 05:18:16.394496] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:57.367 [2024-07-26 05:18:16.394548] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:57.367 [2024-07-26 05:18:16.427121] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:20:57.367 [2024-07-26 05:18:16.429013] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:57.367 05:18:16 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:57.626 [2024-07-26 05:18:16.530056] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:57.626 [2024-07-26 05:18:16.530363] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:57.626 [2024-07-26 05:18:16.664207] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:20:58.194 [2024-07-26 05:18:17.008049] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:58.194 [2024-07-26 05:18:17.128342] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:58.194 [2024-07-26 05:18:17.128651] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:58.453 05:18:17 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:58.453 05:18:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:58.453 05:18:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:58.453 05:18:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:58.453 05:18:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:58.453 05:18:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.453 05:18:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.453 [2024-07-26 05:18:17.444819] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:58.453 [2024-07-26 05:18:17.445202] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:58.713 [2024-07-26 05:18:17.661325] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:58.713 05:18:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:58.713 "name": "raid_bdev1", 00:20:58.713 "uuid": "a3bd4e0d-cf5b-4e8f-ab33-5d3073ffa9dd", 00:20:58.713 "strip_size_kb": 0, 00:20:58.713 "state": "online", 00:20:58.713 "raid_level": "raid1", 00:20:58.713 "superblock": true, 00:20:58.713 "num_base_bdevs": 2, 00:20:58.713 "num_base_bdevs_discovered": 2, 00:20:58.713 "num_base_bdevs_operational": 2, 00:20:58.713 "process": { 00:20:58.713 "type": "rebuild", 00:20:58.713 "target": "spare", 00:20:58.713 "progress": { 00:20:58.713 "blocks": 14336, 00:20:58.713 "percent": 22 00:20:58.713 } 00:20:58.713 }, 00:20:58.713 "base_bdevs_list": [ 00:20:58.713 { 00:20:58.713 "name": "spare", 00:20:58.713 "uuid": "a2db295e-de54-574d-9993-a0084ba39dcf", 00:20:58.713 "is_configured": true, 00:20:58.713 "data_offset": 2048, 00:20:58.713 "data_size": 63488 00:20:58.713 }, 00:20:58.713 { 00:20:58.713 "name": "BaseBdev2", 00:20:58.713 "uuid": "36dbe769-c7b6-5280-96e0-55048fea8aa3", 00:20:58.713 "is_configured": true, 00:20:58.713 "data_offset": 2048, 00:20:58.713 "data_size": 63488 00:20:58.713 } 00:20:58.713 ] 00:20:58.713 }' 00:20:58.713 05:18:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:58.713 05:18:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:58.713 05:18:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:58.713 05:18:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:58.713 05:18:17 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:58.972 [2024-07-26 05:18:17.858976] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:58.972 [2024-07-26 05:18:18.001128] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:58.972 
[2024-07-26 05:18:18.009456] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:58.972 [2024-07-26 05:18:18.034791] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005790 00:20:58.972 05:18:18 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:58.972 05:18:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:58.972 05:18:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:58.972 05:18:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:58.972 05:18:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:58.972 05:18:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:58.972 05:18:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:58.972 05:18:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:58.972 05:18:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:58.972 05:18:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:58.972 05:18:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.972 05:18:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.231 05:18:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:59.231 "name": "raid_bdev1", 00:20:59.231 "uuid": "a3bd4e0d-cf5b-4e8f-ab33-5d3073ffa9dd", 00:20:59.231 "strip_size_kb": 0, 00:20:59.231 "state": "online", 00:20:59.231 "raid_level": "raid1", 00:20:59.231 "superblock": true, 00:20:59.231 "num_base_bdevs": 2, 00:20:59.231 "num_base_bdevs_discovered": 1, 00:20:59.231 "num_base_bdevs_operational": 1, 00:20:59.231 "base_bdevs_list": [ 00:20:59.231 { 00:20:59.231 "name": null, 00:20:59.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.232 "is_configured": false, 00:20:59.232 "data_offset": 2048, 00:20:59.232 "data_size": 63488 00:20:59.232 }, 00:20:59.232 { 00:20:59.232 "name": "BaseBdev2", 00:20:59.232 "uuid": "36dbe769-c7b6-5280-96e0-55048fea8aa3", 00:20:59.232 "is_configured": true, 00:20:59.232 "data_offset": 2048, 00:20:59.232 "data_size": 63488 00:20:59.232 } 00:20:59.232 ] 00:20:59.232 }' 00:20:59.232 05:18:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:59.232 05:18:18 -- common/autotest_common.sh@10 -- # set +x 00:20:59.491 05:18:18 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:59.491 05:18:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:59.491 05:18:18 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:59.491 05:18:18 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:59.491 05:18:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:59.491 05:18:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.491 05:18:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.750 05:18:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:59.750 "name": "raid_bdev1", 00:20:59.751 "uuid": "a3bd4e0d-cf5b-4e8f-ab33-5d3073ffa9dd", 00:20:59.751 "strip_size_kb": 0, 00:20:59.751 "state": "online", 00:20:59.751 "raid_level": "raid1", 00:20:59.751 "superblock": true, 00:20:59.751 "num_base_bdevs": 2, 00:20:59.751 "num_base_bdevs_discovered": 1, 00:20:59.751 "num_base_bdevs_operational": 1, 00:20:59.751 "base_bdevs_list": [ 00:20:59.751 { 00:20:59.751 "name": null, 00:20:59.751 "uuid": "00000000-0000-0000-0000-000000000000", 
00:20:59.751 "is_configured": false, 00:20:59.751 "data_offset": 2048, 00:20:59.751 "data_size": 63488 00:20:59.751 }, 00:20:59.751 { 00:20:59.751 "name": "BaseBdev2", 00:20:59.751 "uuid": "36dbe769-c7b6-5280-96e0-55048fea8aa3", 00:20:59.751 "is_configured": true, 00:20:59.751 "data_offset": 2048, 00:20:59.751 "data_size": 63488 00:20:59.751 } 00:20:59.751 ] 00:20:59.751 }' 00:20:59.751 05:18:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:59.751 05:18:18 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:59.751 05:18:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:59.751 05:18:18 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:59.751 05:18:18 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:00.010 [2024-07-26 05:18:19.021620] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:00.010 [2024-07-26 05:18:19.021672] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:00.010 05:18:19 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:00.010 [2024-07-26 05:18:19.054534] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:21:00.010 [2024-07-26 05:18:19.056583] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:00.269 [2024-07-26 05:18:19.165049] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:00.269 [2024-07-26 05:18:19.165584] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:00.269 [2024-07-26 05:18:19.373339] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:00.269 [2024-07-26 05:18:19.373638] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:00.836 [2024-07-26 05:18:19.739687] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:00.836 [2024-07-26 05:18:19.865733] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:00.836 [2024-07-26 05:18:19.865905] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:01.095 05:18:20 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:01.095 05:18:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:01.095 05:18:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:01.095 05:18:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:01.095 05:18:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:01.095 05:18:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.095 05:18:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.355 05:18:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:01.355 "name": "raid_bdev1", 00:21:01.355 "uuid": "a3bd4e0d-cf5b-4e8f-ab33-5d3073ffa9dd", 00:21:01.355 "strip_size_kb": 0, 00:21:01.355 "state": "online", 00:21:01.355 "raid_level": "raid1", 00:21:01.355 "superblock": true, 00:21:01.355 "num_base_bdevs": 2, 00:21:01.355 
"num_base_bdevs_discovered": 2, 00:21:01.355 "num_base_bdevs_operational": 2, 00:21:01.355 "process": { 00:21:01.355 "type": "rebuild", 00:21:01.355 "target": "spare", 00:21:01.355 "progress": { 00:21:01.355 "blocks": 14336, 00:21:01.355 "percent": 22 00:21:01.355 } 00:21:01.355 }, 00:21:01.355 "base_bdevs_list": [ 00:21:01.355 { 00:21:01.355 "name": "spare", 00:21:01.355 "uuid": "a2db295e-de54-574d-9993-a0084ba39dcf", 00:21:01.355 "is_configured": true, 00:21:01.355 "data_offset": 2048, 00:21:01.355 "data_size": 63488 00:21:01.355 }, 00:21:01.355 { 00:21:01.355 "name": "BaseBdev2", 00:21:01.355 "uuid": "36dbe769-c7b6-5280-96e0-55048fea8aa3", 00:21:01.355 "is_configured": true, 00:21:01.355 "data_offset": 2048, 00:21:01.355 "data_size": 63488 00:21:01.355 } 00:21:01.355 ] 00:21:01.355 }' 00:21:01.355 05:18:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:01.355 [2024-07-26 05:18:20.306631] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:01.355 05:18:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:01.355 05:18:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:01.355 05:18:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:01.355 05:18:20 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:21:01.355 05:18:20 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:21:01.355 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:21:01.355 05:18:20 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:21:01.355 05:18:20 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:01.355 05:18:20 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:21:01.355 05:18:20 -- bdev/bdev_raid.sh@657 -- # local timeout=407 00:21:01.355 05:18:20 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:01.355 05:18:20 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:01.355 05:18:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:01.355 05:18:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:01.355 05:18:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:01.355 05:18:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:01.355 05:18:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.355 05:18:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.614 [2024-07-26 05:18:20.530175] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:01.614 05:18:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:01.614 "name": "raid_bdev1", 00:21:01.614 "uuid": "a3bd4e0d-cf5b-4e8f-ab33-5d3073ffa9dd", 00:21:01.614 "strip_size_kb": 0, 00:21:01.614 "state": "online", 00:21:01.614 "raid_level": "raid1", 00:21:01.614 "superblock": true, 00:21:01.614 "num_base_bdevs": 2, 00:21:01.614 "num_base_bdevs_discovered": 2, 00:21:01.614 "num_base_bdevs_operational": 2, 00:21:01.614 "process": { 00:21:01.614 "type": "rebuild", 00:21:01.614 "target": "spare", 00:21:01.614 "progress": { 00:21:01.614 "blocks": 18432, 00:21:01.614 "percent": 29 00:21:01.614 } 00:21:01.614 }, 00:21:01.614 "base_bdevs_list": [ 00:21:01.614 { 00:21:01.614 "name": "spare", 00:21:01.614 "uuid": "a2db295e-de54-574d-9993-a0084ba39dcf", 00:21:01.614 "is_configured": true, 00:21:01.614 
"data_offset": 2048, 00:21:01.614 "data_size": 63488 00:21:01.614 }, 00:21:01.614 { 00:21:01.614 "name": "BaseBdev2", 00:21:01.614 "uuid": "36dbe769-c7b6-5280-96e0-55048fea8aa3", 00:21:01.614 "is_configured": true, 00:21:01.614 "data_offset": 2048, 00:21:01.614 "data_size": 63488 00:21:01.614 } 00:21:01.614 ] 00:21:01.614 }' 00:21:01.614 05:18:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:01.614 05:18:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:01.614 05:18:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:01.614 05:18:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:01.614 05:18:20 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:01.614 [2024-07-26 05:18:20.652416] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:02.182 [2024-07-26 05:18:21.000991] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:02.182 [2024-07-26 05:18:21.221205] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:21:02.441 [2024-07-26 05:18:21.537700] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:21:02.701 05:18:21 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:02.701 05:18:21 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:02.701 05:18:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:02.701 05:18:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:02.701 05:18:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:02.701 05:18:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:02.701 05:18:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.701 05:18:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.701 [2024-07-26 05:18:21.638924] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:21:02.960 [2024-07-26 05:18:21.848808] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:21:02.960 [2024-07-26 05:18:21.849223] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:21:02.960 05:18:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:02.960 "name": "raid_bdev1", 00:21:02.960 "uuid": "a3bd4e0d-cf5b-4e8f-ab33-5d3073ffa9dd", 00:21:02.960 "strip_size_kb": 0, 00:21:02.960 "state": "online", 00:21:02.960 "raid_level": "raid1", 00:21:02.960 "superblock": true, 00:21:02.960 "num_base_bdevs": 2, 00:21:02.960 "num_base_bdevs_discovered": 2, 00:21:02.960 "num_base_bdevs_operational": 2, 00:21:02.960 "process": { 00:21:02.960 "type": "rebuild", 00:21:02.960 "target": "spare", 00:21:02.960 "progress": { 00:21:02.960 "blocks": 38912, 00:21:02.960 "percent": 61 00:21:02.960 } 00:21:02.960 }, 00:21:02.960 "base_bdevs_list": [ 00:21:02.960 { 00:21:02.960 "name": "spare", 00:21:02.960 "uuid": "a2db295e-de54-574d-9993-a0084ba39dcf", 00:21:02.960 "is_configured": true, 00:21:02.960 "data_offset": 2048, 00:21:02.960 "data_size": 63488 00:21:02.960 }, 00:21:02.960 { 00:21:02.960 "name": "BaseBdev2", 00:21:02.960 "uuid": 
"36dbe769-c7b6-5280-96e0-55048fea8aa3", 00:21:02.960 "is_configured": true, 00:21:02.960 "data_offset": 2048, 00:21:02.960 "data_size": 63488 00:21:02.960 } 00:21:02.960 ] 00:21:02.960 }' 00:21:02.960 05:18:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:02.960 05:18:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:02.960 05:18:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:02.960 05:18:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:02.960 05:18:21 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:03.528 [2024-07-26 05:18:22.612714] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:21:04.097 05:18:22 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:04.097 05:18:22 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:04.097 05:18:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:04.097 05:18:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:04.097 05:18:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:04.097 05:18:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:04.097 05:18:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.097 05:18:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.097 05:18:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:04.097 "name": "raid_bdev1", 00:21:04.097 "uuid": "a3bd4e0d-cf5b-4e8f-ab33-5d3073ffa9dd", 00:21:04.097 "strip_size_kb": 0, 00:21:04.097 "state": "online", 00:21:04.097 "raid_level": "raid1", 00:21:04.097 "superblock": true, 00:21:04.097 "num_base_bdevs": 2, 00:21:04.097 "num_base_bdevs_discovered": 2, 00:21:04.097 "num_base_bdevs_operational": 2, 00:21:04.097 "process": { 00:21:04.097 "type": "rebuild", 00:21:04.097 "target": "spare", 00:21:04.097 "progress": { 00:21:04.097 "blocks": 61440, 00:21:04.097 "percent": 96 00:21:04.097 } 00:21:04.097 }, 00:21:04.097 "base_bdevs_list": [ 00:21:04.097 { 00:21:04.097 "name": "spare", 00:21:04.097 "uuid": "a2db295e-de54-574d-9993-a0084ba39dcf", 00:21:04.097 "is_configured": true, 00:21:04.097 "data_offset": 2048, 00:21:04.097 "data_size": 63488 00:21:04.097 }, 00:21:04.097 { 00:21:04.097 "name": "BaseBdev2", 00:21:04.097 "uuid": "36dbe769-c7b6-5280-96e0-55048fea8aa3", 00:21:04.097 "is_configured": true, 00:21:04.097 "data_offset": 2048, 00:21:04.097 "data_size": 63488 00:21:04.097 } 00:21:04.097 ] 00:21:04.097 }' 00:21:04.097 05:18:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:04.097 05:18:23 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:04.097 05:18:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:04.097 [2024-07-26 05:18:23.160793] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:04.097 05:18:23 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:04.097 05:18:23 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:04.356 [2024-07-26 05:18:23.266557] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:04.356 [2024-07-26 05:18:23.268484] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:05.292 05:18:24 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:05.292 05:18:24 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:21:05.292 05:18:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:05.292 05:18:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:05.292 05:18:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:05.292 05:18:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:05.292 05:18:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.292 05:18:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.551 05:18:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:05.551 "name": "raid_bdev1", 00:21:05.551 "uuid": "a3bd4e0d-cf5b-4e8f-ab33-5d3073ffa9dd", 00:21:05.551 "strip_size_kb": 0, 00:21:05.551 "state": "online", 00:21:05.551 "raid_level": "raid1", 00:21:05.551 "superblock": true, 00:21:05.551 "num_base_bdevs": 2, 00:21:05.551 "num_base_bdevs_discovered": 2, 00:21:05.551 "num_base_bdevs_operational": 2, 00:21:05.551 "base_bdevs_list": [ 00:21:05.551 { 00:21:05.551 "name": "spare", 00:21:05.551 "uuid": "a2db295e-de54-574d-9993-a0084ba39dcf", 00:21:05.551 "is_configured": true, 00:21:05.551 "data_offset": 2048, 00:21:05.551 "data_size": 63488 00:21:05.551 }, 00:21:05.551 { 00:21:05.551 "name": "BaseBdev2", 00:21:05.551 "uuid": "36dbe769-c7b6-5280-96e0-55048fea8aa3", 00:21:05.551 "is_configured": true, 00:21:05.551 "data_offset": 2048, 00:21:05.551 "data_size": 63488 00:21:05.551 } 00:21:05.551 ] 00:21:05.551 }' 00:21:05.551 05:18:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:05.551 05:18:24 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:05.551 05:18:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:05.551 05:18:24 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:05.551 05:18:24 -- bdev/bdev_raid.sh@660 -- # break 00:21:05.551 05:18:24 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:05.551 05:18:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:05.551 05:18:24 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:05.551 05:18:24 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:05.551 05:18:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:05.551 05:18:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.551 05:18:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.810 05:18:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:05.810 "name": "raid_bdev1", 00:21:05.810 "uuid": "a3bd4e0d-cf5b-4e8f-ab33-5d3073ffa9dd", 00:21:05.810 "strip_size_kb": 0, 00:21:05.810 "state": "online", 00:21:05.810 "raid_level": "raid1", 00:21:05.810 "superblock": true, 00:21:05.810 "num_base_bdevs": 2, 00:21:05.810 "num_base_bdevs_discovered": 2, 00:21:05.810 "num_base_bdevs_operational": 2, 00:21:05.810 "base_bdevs_list": [ 00:21:05.810 { 00:21:05.810 "name": "spare", 00:21:05.810 "uuid": "a2db295e-de54-574d-9993-a0084ba39dcf", 00:21:05.811 "is_configured": true, 00:21:05.811 "data_offset": 2048, 00:21:05.811 "data_size": 63488 00:21:05.811 }, 00:21:05.811 { 00:21:05.811 "name": "BaseBdev2", 00:21:05.811 "uuid": "36dbe769-c7b6-5280-96e0-55048fea8aa3", 00:21:05.811 "is_configured": true, 00:21:05.811 "data_offset": 2048, 00:21:05.811 "data_size": 63488 00:21:05.811 } 00:21:05.811 ] 00:21:05.811 }' 00:21:05.811 05:18:24 -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.type // "none"' 00:21:05.811 05:18:24 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:05.811 05:18:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:05.811 05:18:24 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:05.811 05:18:24 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:05.811 05:18:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:05.811 05:18:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:05.811 05:18:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:05.811 05:18:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:05.811 05:18:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:05.811 05:18:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:05.811 05:18:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:05.811 05:18:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:05.811 05:18:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:05.811 05:18:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.811 05:18:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.811 05:18:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:05.811 "name": "raid_bdev1", 00:21:05.811 "uuid": "a3bd4e0d-cf5b-4e8f-ab33-5d3073ffa9dd", 00:21:05.811 "strip_size_kb": 0, 00:21:05.811 "state": "online", 00:21:05.811 "raid_level": "raid1", 00:21:05.811 "superblock": true, 00:21:05.811 "num_base_bdevs": 2, 00:21:05.811 "num_base_bdevs_discovered": 2, 00:21:05.811 "num_base_bdevs_operational": 2, 00:21:05.811 "base_bdevs_list": [ 00:21:05.811 { 00:21:05.811 "name": "spare", 00:21:05.811 "uuid": "a2db295e-de54-574d-9993-a0084ba39dcf", 00:21:05.811 "is_configured": true, 00:21:05.811 "data_offset": 2048, 00:21:05.811 "data_size": 63488 00:21:05.811 }, 00:21:05.811 { 00:21:05.811 "name": "BaseBdev2", 00:21:05.811 "uuid": "36dbe769-c7b6-5280-96e0-55048fea8aa3", 00:21:05.811 "is_configured": true, 00:21:05.811 "data_offset": 2048, 00:21:05.811 "data_size": 63488 00:21:05.811 } 00:21:05.811 ] 00:21:05.811 }' 00:21:05.811 05:18:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:05.811 05:18:24 -- common/autotest_common.sh@10 -- # set +x 00:21:06.379 05:18:25 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:06.379 [2024-07-26 05:18:25.397033] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:06.379 [2024-07-26 05:18:25.397071] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:06.379 00:21:06.379 Latency(us) 00:21:06.379 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.379 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:06.379 raid_bdev1 : 9.86 97.89 293.68 0.00 0.00 13227.81 260.65 108193.98 00:21:06.379 =================================================================================================================== 00:21:06.379 Total : 97.89 293.68 0.00 0.00 13227.81 260.65 108193.98 00:21:06.379 [2024-07-26 05:18:25.436150] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:06.379 [2024-07-26 05:18:25.436331] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:06.379 [2024-07-26 05:18:25.436457] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to fr0 00:21:06.379 ee all in destruct 00:21:06.379 [2024-07-26 05:18:25.436657] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:21:06.379 05:18:25 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.379 05:18:25 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:06.638 05:18:25 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:06.638 05:18:25 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:21:06.638 05:18:25 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:06.638 05:18:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:06.638 05:18:25 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:21:06.638 05:18:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:06.638 05:18:25 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:06.638 05:18:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:06.638 05:18:25 -- bdev/nbd_common.sh@12 -- # local i 00:21:06.638 05:18:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:06.638 05:18:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:06.638 05:18:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:06.897 /dev/nbd0 00:21:06.897 05:18:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:06.897 05:18:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:06.897 05:18:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:06.897 05:18:25 -- common/autotest_common.sh@857 -- # local i 00:21:06.897 05:18:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:06.897 05:18:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:06.897 05:18:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:06.897 05:18:25 -- common/autotest_common.sh@861 -- # break 00:21:06.897 05:18:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:06.897 05:18:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:06.897 05:18:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:06.897 1+0 records in 00:21:06.897 1+0 records out 00:21:06.897 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276396 s, 14.8 MB/s 00:21:06.897 05:18:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:06.897 05:18:25 -- common/autotest_common.sh@874 -- # size=4096 00:21:06.897 05:18:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:06.897 05:18:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:06.897 05:18:25 -- common/autotest_common.sh@877 -- # return 0 00:21:06.897 05:18:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:06.897 05:18:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:06.897 05:18:25 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:06.897 05:18:25 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:21:06.897 05:18:25 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:21:06.897 05:18:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:06.897 05:18:25 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:21:06.897 05:18:25 -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:21:06.897 05:18:25 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:06.897 05:18:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:06.897 05:18:25 -- bdev/nbd_common.sh@12 -- # local i 00:21:06.897 05:18:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:06.897 05:18:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:06.897 05:18:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:21:07.157 /dev/nbd1 00:21:07.157 05:18:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:07.157 05:18:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:07.157 05:18:26 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:07.157 05:18:26 -- common/autotest_common.sh@857 -- # local i 00:21:07.157 05:18:26 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:07.157 05:18:26 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:07.157 05:18:26 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:07.157 05:18:26 -- common/autotest_common.sh@861 -- # break 00:21:07.157 05:18:26 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:07.157 05:18:26 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:07.157 05:18:26 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:07.157 1+0 records in 00:21:07.157 1+0 records out 00:21:07.157 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277629 s, 14.8 MB/s 00:21:07.157 05:18:26 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:07.157 05:18:26 -- common/autotest_common.sh@874 -- # size=4096 00:21:07.157 05:18:26 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:07.157 05:18:26 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:07.157 05:18:26 -- common/autotest_common.sh@877 -- # return 0 00:21:07.157 05:18:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:07.157 05:18:26 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:07.157 05:18:26 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:07.416 05:18:26 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:07.416 05:18:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:07.416 05:18:26 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:07.416 05:18:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:07.416 05:18:26 -- bdev/nbd_common.sh@51 -- # local i 00:21:07.416 05:18:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:07.416 05:18:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:07.416 05:18:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:07.416 05:18:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:07.416 05:18:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:07.416 05:18:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:07.416 05:18:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:07.416 05:18:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:07.416 05:18:26 -- bdev/nbd_common.sh@41 -- # break 00:21:07.416 05:18:26 -- bdev/nbd_common.sh@45 -- # return 0 00:21:07.416 05:18:26 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:07.416 05:18:26 -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:07.416 05:18:26 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:07.416 05:18:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:07.416 05:18:26 -- bdev/nbd_common.sh@51 -- # local i 00:21:07.416 05:18:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:07.416 05:18:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:07.675 05:18:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:07.675 05:18:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:07.675 05:18:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:07.675 05:18:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:07.675 05:18:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:07.675 05:18:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:07.675 05:18:26 -- bdev/nbd_common.sh@41 -- # break 00:21:07.675 05:18:26 -- bdev/nbd_common.sh@45 -- # return 0 00:21:07.675 05:18:26 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:21:07.675 05:18:26 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:07.675 05:18:26 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:21:07.675 05:18:26 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:07.934 05:18:26 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:08.193 [2024-07-26 05:18:27.102352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:08.193 [2024-07-26 05:18:27.102421] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.193 [2024-07-26 05:18:27.102475] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:21:08.193 [2024-07-26 05:18:27.102508] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.193 [2024-07-26 05:18:27.104784] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.193 [2024-07-26 05:18:27.104828] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:08.193 [2024-07-26 05:18:27.104920] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:08.193 [2024-07-26 05:18:27.104972] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:08.193 BaseBdev1 00:21:08.193 05:18:27 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:08.193 05:18:27 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:21:08.193 05:18:27 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:21:08.451 05:18:27 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:08.710 [2024-07-26 05:18:27.598567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:08.710 [2024-07-26 05:18:27.598857] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.710 [2024-07-26 05:18:27.598915] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:21:08.710 [2024-07-26 05:18:27.598934] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.710 [2024-07-26 05:18:27.599549] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.710 [2024-07-26 05:18:27.599600] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:08.710 [2024-07-26 05:18:27.599693] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:21:08.710 [2024-07-26 05:18:27.599714] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:21:08.710 [2024-07-26 05:18:27.599724] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:08.710 [2024-07-26 05:18:27.599757] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state configuring 00:21:08.710 [2024-07-26 05:18:27.599817] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:08.710 BaseBdev2 00:21:08.710 05:18:27 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:08.710 05:18:27 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:08.969 [2024-07-26 05:18:27.958707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:08.969 [2024-07-26 05:18:27.958913] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.969 [2024-07-26 05:18:27.958956] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:21:08.970 [2024-07-26 05:18:27.958970] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.970 [2024-07-26 05:18:27.959564] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.970 [2024-07-26 05:18:27.959591] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:08.970 [2024-07-26 05:18:27.959735] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:08.970 [2024-07-26 05:18:27.959763] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:08.970 spare 00:21:08.970 05:18:27 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:08.970 05:18:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:08.970 05:18:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:08.970 05:18:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:08.970 05:18:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:08.970 05:18:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:08.970 05:18:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:08.970 05:18:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:08.970 05:18:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:08.970 05:18:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:08.970 05:18:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.970 05:18:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.970 [2024-07-26 05:18:28.059886] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ab80 00:21:08.970 [2024-07-26 
05:18:28.060065] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:08.970 [2024-07-26 05:18:28.060240] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002a7e0 00:21:08.970 [2024-07-26 05:18:28.060689] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ab80 00:21:08.970 [2024-07-26 05:18:28.060722] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ab80 00:21:08.970 [2024-07-26 05:18:28.060871] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:09.229 05:18:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:09.229 "name": "raid_bdev1", 00:21:09.229 "uuid": "a3bd4e0d-cf5b-4e8f-ab33-5d3073ffa9dd", 00:21:09.229 "strip_size_kb": 0, 00:21:09.229 "state": "online", 00:21:09.229 "raid_level": "raid1", 00:21:09.229 "superblock": true, 00:21:09.229 "num_base_bdevs": 2, 00:21:09.229 "num_base_bdevs_discovered": 2, 00:21:09.229 "num_base_bdevs_operational": 2, 00:21:09.229 "base_bdevs_list": [ 00:21:09.229 { 00:21:09.229 "name": "spare", 00:21:09.229 "uuid": "a2db295e-de54-574d-9993-a0084ba39dcf", 00:21:09.229 "is_configured": true, 00:21:09.229 "data_offset": 2048, 00:21:09.229 "data_size": 63488 00:21:09.229 }, 00:21:09.229 { 00:21:09.229 "name": "BaseBdev2", 00:21:09.229 "uuid": "36dbe769-c7b6-5280-96e0-55048fea8aa3", 00:21:09.229 "is_configured": true, 00:21:09.229 "data_offset": 2048, 00:21:09.229 "data_size": 63488 00:21:09.229 } 00:21:09.229 ] 00:21:09.229 }' 00:21:09.229 05:18:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:09.229 05:18:28 -- common/autotest_common.sh@10 -- # set +x 00:21:09.488 05:18:28 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:09.488 05:18:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:09.488 05:18:28 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:09.488 05:18:28 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:09.488 05:18:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:09.488 05:18:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.488 05:18:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.747 05:18:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:09.747 "name": "raid_bdev1", 00:21:09.747 "uuid": "a3bd4e0d-cf5b-4e8f-ab33-5d3073ffa9dd", 00:21:09.747 "strip_size_kb": 0, 00:21:09.747 "state": "online", 00:21:09.747 "raid_level": "raid1", 00:21:09.747 "superblock": true, 00:21:09.747 "num_base_bdevs": 2, 00:21:09.747 "num_base_bdevs_discovered": 2, 00:21:09.747 "num_base_bdevs_operational": 2, 00:21:09.747 "base_bdevs_list": [ 00:21:09.747 { 00:21:09.747 "name": "spare", 00:21:09.747 "uuid": "a2db295e-de54-574d-9993-a0084ba39dcf", 00:21:09.747 "is_configured": true, 00:21:09.747 "data_offset": 2048, 00:21:09.747 "data_size": 63488 00:21:09.747 }, 00:21:09.747 { 00:21:09.747 "name": "BaseBdev2", 00:21:09.747 "uuid": "36dbe769-c7b6-5280-96e0-55048fea8aa3", 00:21:09.747 "is_configured": true, 00:21:09.747 "data_offset": 2048, 00:21:09.747 "data_size": 63488 00:21:09.747 } 00:21:09.747 ] 00:21:09.747 }' 00:21:09.747 05:18:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:09.747 05:18:28 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:09.747 05:18:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:09.747 05:18:28 -- 
bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:09.747 05:18:28 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.747 05:18:28 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:10.006 05:18:28 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:21:10.006 05:18:28 -- bdev/bdev_raid.sh@709 -- # killprocess 79627 00:21:10.006 05:18:28 -- common/autotest_common.sh@926 -- # '[' -z 79627 ']' 00:21:10.006 05:18:28 -- common/autotest_common.sh@930 -- # kill -0 79627 00:21:10.006 05:18:28 -- common/autotest_common.sh@931 -- # uname 00:21:10.006 05:18:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:10.006 05:18:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79627 00:21:10.007 killing process with pid 79627 00:21:10.007 Received shutdown signal, test time was about 13.461091 seconds 00:21:10.007 00:21:10.007 Latency(us) 00:21:10.007 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.007 =================================================================================================================== 00:21:10.007 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:10.007 05:18:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:10.007 05:18:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:10.007 05:18:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79627' 00:21:10.007 05:18:29 -- common/autotest_common.sh@945 -- # kill 79627 00:21:10.007 05:18:29 -- common/autotest_common.sh@950 -- # wait 79627 00:21:10.007 [2024-07-26 05:18:29.024368] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:10.007 [2024-07-26 05:18:29.024448] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:10.007 [2024-07-26 05:18:29.024572] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:10.007 [2024-07-26 05:18:29.024593] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ab80 name raid_bdev1, state offline 00:21:10.266 [2024-07-26 05:18:29.175118] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:11.217 00:21:11.217 real 0m18.268s 00:21:11.217 user 0m27.179s 00:21:11.217 sys 0m2.170s 00:21:11.217 ************************************ 00:21:11.217 END TEST raid_rebuild_test_sb_io 00:21:11.217 ************************************ 00:21:11.217 05:18:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:11.217 05:18:30 -- common/autotest_common.sh@10 -- # set +x 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false 00:21:11.217 05:18:30 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:11.217 05:18:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:11.217 05:18:30 -- common/autotest_common.sh@10 -- # set +x 00:21:11.217 ************************************ 00:21:11.217 START TEST raid_rebuild_test 00:21:11.217 ************************************ 00:21:11.217 05:18:30 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false false 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:21:11.217 05:18:30 -- 
bdev/bdev_raid.sh@519 -- # local superblock=false 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:11.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@544 -- # raid_pid=80135 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@545 -- # waitforlisten 80135 /var/tmp/spdk-raid.sock 00:21:11.217 05:18:30 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:11.217 05:18:30 -- common/autotest_common.sh@819 -- # '[' -z 80135 ']' 00:21:11.217 05:18:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:11.217 05:18:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:11.217 05:18:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:11.217 05:18:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:11.217 05:18:30 -- common/autotest_common.sh@10 -- # set +x 00:21:11.217 [2024-07-26 05:18:30.245620] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:11.217 [2024-07-26 05:18:30.245952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefI/O size of 3145728 is greater than zero copy threshold (65536). 00:21:11.217 Zero copy mechanism will not be used. 
00:21:11.217 ix=spdk_pid80135 ] 00:21:11.513 [2024-07-26 05:18:30.396900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.513 [2024-07-26 05:18:30.548582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.772 [2024-07-26 05:18:30.691341] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:12.337 05:18:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:12.337 05:18:31 -- common/autotest_common.sh@852 -- # return 0 00:21:12.337 05:18:31 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:12.337 05:18:31 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:12.337 05:18:31 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:12.337 BaseBdev1 00:21:12.595 05:18:31 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:12.595 05:18:31 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:12.595 05:18:31 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:12.595 BaseBdev2 00:21:12.595 05:18:31 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:12.595 05:18:31 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:12.595 05:18:31 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:12.853 BaseBdev3 00:21:12.853 05:18:31 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:12.853 05:18:31 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:12.853 05:18:31 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:13.112 BaseBdev4 00:21:13.112 05:18:32 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:13.371 spare_malloc 00:21:13.371 05:18:32 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:13.371 spare_delay 00:21:13.371 05:18:32 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:13.630 [2024-07-26 05:18:32.633048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:13.630 [2024-07-26 05:18:32.633123] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:13.630 [2024-07-26 05:18:32.633150] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:21:13.630 [2024-07-26 05:18:32.633166] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:13.630 [2024-07-26 05:18:32.635513] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:13.630 [2024-07-26 05:18:32.635695] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:13.630 spare 00:21:13.630 05:18:32 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:13.889 [2024-07-26 05:18:32.809141] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:13.889 
[2024-07-26 05:18:32.811147] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:13.889 [2024-07-26 05:18:32.811200] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:13.889 [2024-07-26 05:18:32.811249] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:13.889 [2024-07-26 05:18:32.811316] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:21:13.889 [2024-07-26 05:18:32.811333] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:13.889 [2024-07-26 05:18:32.811443] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:21:13.889 [2024-07-26 05:18:32.811759] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:21:13.889 [2024-07-26 05:18:32.811774] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:21:13.889 [2024-07-26 05:18:32.811919] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:13.889 05:18:32 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:13.889 05:18:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:13.889 05:18:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:13.889 05:18:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:13.889 05:18:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:13.889 05:18:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:13.889 05:18:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:13.889 05:18:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:13.889 05:18:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:13.889 05:18:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:13.889 05:18:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.889 05:18:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.148 05:18:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:14.148 "name": "raid_bdev1", 00:21:14.148 "uuid": "4fafad1d-3bc0-4d98-bb87-7d62bbf4663b", 00:21:14.148 "strip_size_kb": 0, 00:21:14.148 "state": "online", 00:21:14.148 "raid_level": "raid1", 00:21:14.148 "superblock": false, 00:21:14.148 "num_base_bdevs": 4, 00:21:14.148 "num_base_bdevs_discovered": 4, 00:21:14.148 "num_base_bdevs_operational": 4, 00:21:14.148 "base_bdevs_list": [ 00:21:14.148 { 00:21:14.148 "name": "BaseBdev1", 00:21:14.148 "uuid": "2d20afae-badd-4aef-b5bd-db33820f388e", 00:21:14.148 "is_configured": true, 00:21:14.148 "data_offset": 0, 00:21:14.148 "data_size": 65536 00:21:14.148 }, 00:21:14.148 { 00:21:14.148 "name": "BaseBdev2", 00:21:14.148 "uuid": "23e5f408-4f6a-4ed3-a06f-f33c24fb47b4", 00:21:14.148 "is_configured": true, 00:21:14.148 "data_offset": 0, 00:21:14.148 "data_size": 65536 00:21:14.148 }, 00:21:14.148 { 00:21:14.148 "name": "BaseBdev3", 00:21:14.148 "uuid": "11d7f601-9476-4d6e-bbae-1aeebb918057", 00:21:14.148 "is_configured": true, 00:21:14.148 "data_offset": 0, 00:21:14.148 "data_size": 65536 00:21:14.148 }, 00:21:14.148 { 00:21:14.148 "name": "BaseBdev4", 00:21:14.148 "uuid": "c280cf61-6f54-47c7-80b7-77b0f5d7a23c", 00:21:14.148 "is_configured": true, 00:21:14.148 "data_offset": 0, 00:21:14.148 "data_size": 65536 00:21:14.148 } 00:21:14.148 ] 
00:21:14.148 }' 00:21:14.148 05:18:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:14.148 05:18:33 -- common/autotest_common.sh@10 -- # set +x 00:21:14.406 05:18:33 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:14.406 05:18:33 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:14.664 [2024-07-26 05:18:33.589577] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:14.664 05:18:33 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:21:14.664 05:18:33 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:14.664 05:18:33 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.922 05:18:33 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:21:14.922 05:18:33 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:21:14.922 05:18:33 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:21:14.922 05:18:33 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:14.922 05:18:33 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:14.922 05:18:33 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:14.922 05:18:33 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:14.922 05:18:33 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:14.922 05:18:33 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:14.922 05:18:33 -- bdev/nbd_common.sh@12 -- # local i 00:21:14.922 05:18:33 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:14.922 05:18:33 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:14.922 05:18:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:14.922 [2024-07-26 05:18:33.957467] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:21:14.922 /dev/nbd0 00:21:14.922 05:18:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:14.922 05:18:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:14.922 05:18:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:14.922 05:18:33 -- common/autotest_common.sh@857 -- # local i 00:21:14.922 05:18:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:14.922 05:18:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:14.922 05:18:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:14.922 05:18:33 -- common/autotest_common.sh@861 -- # break 00:21:14.922 05:18:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:14.922 05:18:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:14.922 05:18:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:14.922 1+0 records in 00:21:14.922 1+0 records out 00:21:14.922 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233316 s, 17.6 MB/s 00:21:14.922 05:18:34 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:14.922 05:18:34 -- common/autotest_common.sh@874 -- # size=4096 00:21:14.922 05:18:34 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:14.922 05:18:34 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:14.922 05:18:34 -- common/autotest_common.sh@877 -- # return 0 00:21:14.922 05:18:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:21:14.922 05:18:34 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:14.922 05:18:34 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:21:14.923 05:18:34 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:21:14.923 05:18:34 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:21:21.485 65536+0 records in 00:21:21.485 65536+0 records out 00:21:21.485 33554432 bytes (34 MB, 32 MiB) copied, 5.99125 s, 5.6 MB/s 00:21:21.485 05:18:40 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:21.485 05:18:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:21.485 05:18:40 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:21.485 05:18:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:21.485 05:18:40 -- bdev/nbd_common.sh@51 -- # local i 00:21:21.485 05:18:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:21.485 05:18:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:21.485 [2024-07-26 05:18:40.208037] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:21.485 05:18:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:21.485 05:18:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:21.485 05:18:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:21.485 05:18:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:21.485 05:18:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:21.485 05:18:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:21.485 05:18:40 -- bdev/nbd_common.sh@41 -- # break 00:21:21.486 05:18:40 -- bdev/nbd_common.sh@45 -- # return 0 00:21:21.486 05:18:40 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:21.486 [2024-07-26 05:18:40.408721] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:21.486 05:18:40 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:21.486 05:18:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:21.486 05:18:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:21.486 05:18:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:21.486 05:18:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:21.486 05:18:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:21.486 05:18:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:21.486 05:18:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:21.486 05:18:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:21.486 05:18:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:21.486 05:18:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.486 05:18:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.745 05:18:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:21.745 "name": "raid_bdev1", 00:21:21.745 "uuid": "4fafad1d-3bc0-4d98-bb87-7d62bbf4663b", 00:21:21.745 "strip_size_kb": 0, 00:21:21.745 "state": "online", 00:21:21.745 "raid_level": "raid1", 00:21:21.745 "superblock": false, 00:21:21.745 "num_base_bdevs": 4, 00:21:21.745 "num_base_bdevs_discovered": 3, 00:21:21.745 "num_base_bdevs_operational": 3, 00:21:21.745 "base_bdevs_list": [ 00:21:21.745 { 00:21:21.745 "name": null, 
00:21:21.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.745 "is_configured": false, 00:21:21.745 "data_offset": 0, 00:21:21.745 "data_size": 65536 00:21:21.745 }, 00:21:21.745 { 00:21:21.745 "name": "BaseBdev2", 00:21:21.745 "uuid": "23e5f408-4f6a-4ed3-a06f-f33c24fb47b4", 00:21:21.745 "is_configured": true, 00:21:21.745 "data_offset": 0, 00:21:21.745 "data_size": 65536 00:21:21.745 }, 00:21:21.745 { 00:21:21.745 "name": "BaseBdev3", 00:21:21.745 "uuid": "11d7f601-9476-4d6e-bbae-1aeebb918057", 00:21:21.745 "is_configured": true, 00:21:21.745 "data_offset": 0, 00:21:21.745 "data_size": 65536 00:21:21.745 }, 00:21:21.745 { 00:21:21.745 "name": "BaseBdev4", 00:21:21.745 "uuid": "c280cf61-6f54-47c7-80b7-77b0f5d7a23c", 00:21:21.745 "is_configured": true, 00:21:21.745 "data_offset": 0, 00:21:21.745 "data_size": 65536 00:21:21.745 } 00:21:21.745 ] 00:21:21.745 }' 00:21:21.745 05:18:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:21.745 05:18:40 -- common/autotest_common.sh@10 -- # set +x 00:21:22.004 05:18:40 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:22.263 [2024-07-26 05:18:41.188882] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:22.263 [2024-07-26 05:18:41.188933] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:22.263 [2024-07-26 05:18:41.199225] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000d09620 00:21:22.263 [2024-07-26 05:18:41.201045] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:22.263 05:18:41 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:23.205 05:18:42 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:23.205 05:18:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:23.205 05:18:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:23.205 05:18:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:23.205 05:18:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:23.205 05:18:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.205 05:18:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.464 05:18:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:23.464 "name": "raid_bdev1", 00:21:23.465 "uuid": "4fafad1d-3bc0-4d98-bb87-7d62bbf4663b", 00:21:23.465 "strip_size_kb": 0, 00:21:23.465 "state": "online", 00:21:23.465 "raid_level": "raid1", 00:21:23.465 "superblock": false, 00:21:23.465 "num_base_bdevs": 4, 00:21:23.465 "num_base_bdevs_discovered": 4, 00:21:23.465 "num_base_bdevs_operational": 4, 00:21:23.465 "process": { 00:21:23.465 "type": "rebuild", 00:21:23.465 "target": "spare", 00:21:23.465 "progress": { 00:21:23.465 "blocks": 24576, 00:21:23.465 "percent": 37 00:21:23.465 } 00:21:23.465 }, 00:21:23.465 "base_bdevs_list": [ 00:21:23.465 { 00:21:23.465 "name": "spare", 00:21:23.465 "uuid": "48e44ebd-2d69-500c-931e-d69dca2c5a36", 00:21:23.465 "is_configured": true, 00:21:23.465 "data_offset": 0, 00:21:23.465 "data_size": 65536 00:21:23.465 }, 00:21:23.465 { 00:21:23.465 "name": "BaseBdev2", 00:21:23.465 "uuid": "23e5f408-4f6a-4ed3-a06f-f33c24fb47b4", 00:21:23.465 "is_configured": true, 00:21:23.465 "data_offset": 0, 00:21:23.465 "data_size": 65536 00:21:23.465 }, 00:21:23.465 { 00:21:23.465 "name": 
"BaseBdev3", 00:21:23.465 "uuid": "11d7f601-9476-4d6e-bbae-1aeebb918057", 00:21:23.465 "is_configured": true, 00:21:23.465 "data_offset": 0, 00:21:23.465 "data_size": 65536 00:21:23.465 }, 00:21:23.465 { 00:21:23.465 "name": "BaseBdev4", 00:21:23.465 "uuid": "c280cf61-6f54-47c7-80b7-77b0f5d7a23c", 00:21:23.465 "is_configured": true, 00:21:23.465 "data_offset": 0, 00:21:23.465 "data_size": 65536 00:21:23.465 } 00:21:23.465 ] 00:21:23.465 }' 00:21:23.465 05:18:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:23.465 05:18:42 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:23.465 05:18:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:23.465 05:18:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:23.465 05:18:42 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:23.724 [2024-07-26 05:18:42.651350] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:23.724 [2024-07-26 05:18:42.707721] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:23.724 [2024-07-26 05:18:42.707805] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:23.724 05:18:42 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:23.724 05:18:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:23.724 05:18:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:23.724 05:18:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:23.724 05:18:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:23.724 05:18:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:23.724 05:18:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:23.724 05:18:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:23.724 05:18:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:23.725 05:18:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:23.725 05:18:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.725 05:18:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.984 05:18:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:23.984 "name": "raid_bdev1", 00:21:23.984 "uuid": "4fafad1d-3bc0-4d98-bb87-7d62bbf4663b", 00:21:23.984 "strip_size_kb": 0, 00:21:23.984 "state": "online", 00:21:23.984 "raid_level": "raid1", 00:21:23.984 "superblock": false, 00:21:23.984 "num_base_bdevs": 4, 00:21:23.984 "num_base_bdevs_discovered": 3, 00:21:23.984 "num_base_bdevs_operational": 3, 00:21:23.984 "base_bdevs_list": [ 00:21:23.984 { 00:21:23.984 "name": null, 00:21:23.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.984 "is_configured": false, 00:21:23.984 "data_offset": 0, 00:21:23.984 "data_size": 65536 00:21:23.984 }, 00:21:23.984 { 00:21:23.984 "name": "BaseBdev2", 00:21:23.984 "uuid": "23e5f408-4f6a-4ed3-a06f-f33c24fb47b4", 00:21:23.984 "is_configured": true, 00:21:23.984 "data_offset": 0, 00:21:23.984 "data_size": 65536 00:21:23.984 }, 00:21:23.984 { 00:21:23.984 "name": "BaseBdev3", 00:21:23.984 "uuid": "11d7f601-9476-4d6e-bbae-1aeebb918057", 00:21:23.984 "is_configured": true, 00:21:23.984 "data_offset": 0, 00:21:23.984 "data_size": 65536 00:21:23.984 }, 00:21:23.984 { 00:21:23.984 "name": "BaseBdev4", 00:21:23.984 "uuid": 
"c280cf61-6f54-47c7-80b7-77b0f5d7a23c", 00:21:23.984 "is_configured": true, 00:21:23.984 "data_offset": 0, 00:21:23.984 "data_size": 65536 00:21:23.984 } 00:21:23.984 ] 00:21:23.984 }' 00:21:23.984 05:18:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:23.984 05:18:42 -- common/autotest_common.sh@10 -- # set +x 00:21:24.243 05:18:43 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:24.243 05:18:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:24.243 05:18:43 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:24.243 05:18:43 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:24.243 05:18:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:24.243 05:18:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.243 05:18:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.502 05:18:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:24.502 "name": "raid_bdev1", 00:21:24.502 "uuid": "4fafad1d-3bc0-4d98-bb87-7d62bbf4663b", 00:21:24.502 "strip_size_kb": 0, 00:21:24.502 "state": "online", 00:21:24.502 "raid_level": "raid1", 00:21:24.502 "superblock": false, 00:21:24.502 "num_base_bdevs": 4, 00:21:24.502 "num_base_bdevs_discovered": 3, 00:21:24.502 "num_base_bdevs_operational": 3, 00:21:24.502 "base_bdevs_list": [ 00:21:24.502 { 00:21:24.502 "name": null, 00:21:24.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.502 "is_configured": false, 00:21:24.502 "data_offset": 0, 00:21:24.502 "data_size": 65536 00:21:24.502 }, 00:21:24.502 { 00:21:24.502 "name": "BaseBdev2", 00:21:24.502 "uuid": "23e5f408-4f6a-4ed3-a06f-f33c24fb47b4", 00:21:24.502 "is_configured": true, 00:21:24.502 "data_offset": 0, 00:21:24.502 "data_size": 65536 00:21:24.502 }, 00:21:24.502 { 00:21:24.502 "name": "BaseBdev3", 00:21:24.502 "uuid": "11d7f601-9476-4d6e-bbae-1aeebb918057", 00:21:24.502 "is_configured": true, 00:21:24.502 "data_offset": 0, 00:21:24.502 "data_size": 65536 00:21:24.502 }, 00:21:24.502 { 00:21:24.502 "name": "BaseBdev4", 00:21:24.502 "uuid": "c280cf61-6f54-47c7-80b7-77b0f5d7a23c", 00:21:24.502 "is_configured": true, 00:21:24.502 "data_offset": 0, 00:21:24.502 "data_size": 65536 00:21:24.502 } 00:21:24.502 ] 00:21:24.502 }' 00:21:24.502 05:18:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:24.502 05:18:43 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:24.502 05:18:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:24.502 05:18:43 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:24.502 05:18:43 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:24.761 [2024-07-26 05:18:43.650423] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:24.761 [2024-07-26 05:18:43.650643] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:24.761 [2024-07-26 05:18:43.660484] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000d096f0 00:21:24.761 [2024-07-26 05:18:43.662602] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:24.761 05:18:43 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:25.698 05:18:44 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:25.698 05:18:44 -- bdev/bdev_raid.sh@183 -- # 
local raid_bdev_name=raid_bdev1 00:21:25.698 05:18:44 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:25.699 05:18:44 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:25.699 05:18:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:25.699 05:18:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.699 05:18:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.958 05:18:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:25.958 "name": "raid_bdev1", 00:21:25.958 "uuid": "4fafad1d-3bc0-4d98-bb87-7d62bbf4663b", 00:21:25.958 "strip_size_kb": 0, 00:21:25.958 "state": "online", 00:21:25.958 "raid_level": "raid1", 00:21:25.958 "superblock": false, 00:21:25.958 "num_base_bdevs": 4, 00:21:25.958 "num_base_bdevs_discovered": 4, 00:21:25.958 "num_base_bdevs_operational": 4, 00:21:25.958 "process": { 00:21:25.958 "type": "rebuild", 00:21:25.958 "target": "spare", 00:21:25.958 "progress": { 00:21:25.958 "blocks": 22528, 00:21:25.958 "percent": 34 00:21:25.958 } 00:21:25.958 }, 00:21:25.958 "base_bdevs_list": [ 00:21:25.958 { 00:21:25.958 "name": "spare", 00:21:25.958 "uuid": "48e44ebd-2d69-500c-931e-d69dca2c5a36", 00:21:25.958 "is_configured": true, 00:21:25.958 "data_offset": 0, 00:21:25.958 "data_size": 65536 00:21:25.958 }, 00:21:25.958 { 00:21:25.958 "name": "BaseBdev2", 00:21:25.958 "uuid": "23e5f408-4f6a-4ed3-a06f-f33c24fb47b4", 00:21:25.958 "is_configured": true, 00:21:25.958 "data_offset": 0, 00:21:25.958 "data_size": 65536 00:21:25.958 }, 00:21:25.958 { 00:21:25.958 "name": "BaseBdev3", 00:21:25.958 "uuid": "11d7f601-9476-4d6e-bbae-1aeebb918057", 00:21:25.958 "is_configured": true, 00:21:25.958 "data_offset": 0, 00:21:25.958 "data_size": 65536 00:21:25.958 }, 00:21:25.958 { 00:21:25.958 "name": "BaseBdev4", 00:21:25.958 "uuid": "c280cf61-6f54-47c7-80b7-77b0f5d7a23c", 00:21:25.958 "is_configured": true, 00:21:25.958 "data_offset": 0, 00:21:25.958 "data_size": 65536 00:21:25.958 } 00:21:25.958 ] 00:21:25.958 }' 00:21:25.958 05:18:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:25.958 05:18:44 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:25.958 05:18:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:25.958 05:18:44 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:25.958 05:18:44 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:21:25.958 05:18:44 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:21:25.958 05:18:44 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:25.958 05:18:44 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:21:25.958 05:18:44 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:26.217 [2024-07-26 05:18:45.092883] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:26.217 [2024-07-26 05:18:45.169325] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000d096f0 00:21:26.217 05:18:45 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:21:26.217 05:18:45 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:21:26.217 05:18:45 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:26.217 05:18:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:26.217 05:18:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 
00:21:26.218 05:18:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:26.218 05:18:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:26.218 05:18:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.218 05:18:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.477 05:18:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:26.477 "name": "raid_bdev1", 00:21:26.477 "uuid": "4fafad1d-3bc0-4d98-bb87-7d62bbf4663b", 00:21:26.477 "strip_size_kb": 0, 00:21:26.477 "state": "online", 00:21:26.477 "raid_level": "raid1", 00:21:26.477 "superblock": false, 00:21:26.477 "num_base_bdevs": 4, 00:21:26.477 "num_base_bdevs_discovered": 3, 00:21:26.477 "num_base_bdevs_operational": 3, 00:21:26.477 "process": { 00:21:26.477 "type": "rebuild", 00:21:26.477 "target": "spare", 00:21:26.477 "progress": { 00:21:26.477 "blocks": 34816, 00:21:26.477 "percent": 53 00:21:26.477 } 00:21:26.477 }, 00:21:26.477 "base_bdevs_list": [ 00:21:26.477 { 00:21:26.477 "name": "spare", 00:21:26.477 "uuid": "48e44ebd-2d69-500c-931e-d69dca2c5a36", 00:21:26.477 "is_configured": true, 00:21:26.477 "data_offset": 0, 00:21:26.477 "data_size": 65536 00:21:26.477 }, 00:21:26.477 { 00:21:26.477 "name": null, 00:21:26.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.477 "is_configured": false, 00:21:26.477 "data_offset": 0, 00:21:26.477 "data_size": 65536 00:21:26.477 }, 00:21:26.477 { 00:21:26.477 "name": "BaseBdev3", 00:21:26.477 "uuid": "11d7f601-9476-4d6e-bbae-1aeebb918057", 00:21:26.477 "is_configured": true, 00:21:26.477 "data_offset": 0, 00:21:26.477 "data_size": 65536 00:21:26.477 }, 00:21:26.477 { 00:21:26.477 "name": "BaseBdev4", 00:21:26.477 "uuid": "c280cf61-6f54-47c7-80b7-77b0f5d7a23c", 00:21:26.477 "is_configured": true, 00:21:26.477 "data_offset": 0, 00:21:26.477 "data_size": 65536 00:21:26.477 } 00:21:26.477 ] 00:21:26.477 }' 00:21:26.477 05:18:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:26.477 05:18:45 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:26.477 05:18:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:26.477 05:18:45 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:26.477 05:18:45 -- bdev/bdev_raid.sh@657 -- # local timeout=432 00:21:26.477 05:18:45 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:26.477 05:18:45 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:26.477 05:18:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:26.477 05:18:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:26.477 05:18:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:26.477 05:18:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:26.477 05:18:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.477 05:18:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.736 05:18:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:26.736 "name": "raid_bdev1", 00:21:26.736 "uuid": "4fafad1d-3bc0-4d98-bb87-7d62bbf4663b", 00:21:26.736 "strip_size_kb": 0, 00:21:26.736 "state": "online", 00:21:26.736 "raid_level": "raid1", 00:21:26.736 "superblock": false, 00:21:26.736 "num_base_bdevs": 4, 00:21:26.736 "num_base_bdevs_discovered": 3, 00:21:26.736 "num_base_bdevs_operational": 3, 00:21:26.736 "process": { 
00:21:26.736 "type": "rebuild", 00:21:26.736 "target": "spare", 00:21:26.736 "progress": { 00:21:26.736 "blocks": 40960, 00:21:26.736 "percent": 62 00:21:26.736 } 00:21:26.736 }, 00:21:26.736 "base_bdevs_list": [ 00:21:26.736 { 00:21:26.736 "name": "spare", 00:21:26.736 "uuid": "48e44ebd-2d69-500c-931e-d69dca2c5a36", 00:21:26.736 "is_configured": true, 00:21:26.736 "data_offset": 0, 00:21:26.736 "data_size": 65536 00:21:26.736 }, 00:21:26.736 { 00:21:26.736 "name": null, 00:21:26.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.736 "is_configured": false, 00:21:26.736 "data_offset": 0, 00:21:26.736 "data_size": 65536 00:21:26.736 }, 00:21:26.736 { 00:21:26.736 "name": "BaseBdev3", 00:21:26.736 "uuid": "11d7f601-9476-4d6e-bbae-1aeebb918057", 00:21:26.736 "is_configured": true, 00:21:26.736 "data_offset": 0, 00:21:26.736 "data_size": 65536 00:21:26.736 }, 00:21:26.736 { 00:21:26.736 "name": "BaseBdev4", 00:21:26.736 "uuid": "c280cf61-6f54-47c7-80b7-77b0f5d7a23c", 00:21:26.736 "is_configured": true, 00:21:26.736 "data_offset": 0, 00:21:26.736 "data_size": 65536 00:21:26.737 } 00:21:26.737 ] 00:21:26.737 }' 00:21:26.737 05:18:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:26.737 05:18:45 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:26.737 05:18:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:26.737 05:18:45 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:26.737 05:18:45 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:27.674 05:18:46 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:27.674 05:18:46 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:27.674 05:18:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:27.674 05:18:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:27.674 05:18:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:27.674 05:18:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:27.674 05:18:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.674 05:18:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.933 [2024-07-26 05:18:46.876622] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:27.933 [2024-07-26 05:18:46.876865] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:27.933 [2024-07-26 05:18:46.877063] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:27.933 05:18:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:27.933 "name": "raid_bdev1", 00:21:27.933 "uuid": "4fafad1d-3bc0-4d98-bb87-7d62bbf4663b", 00:21:27.933 "strip_size_kb": 0, 00:21:27.933 "state": "online", 00:21:27.933 "raid_level": "raid1", 00:21:27.933 "superblock": false, 00:21:27.933 "num_base_bdevs": 4, 00:21:27.933 "num_base_bdevs_discovered": 3, 00:21:27.933 "num_base_bdevs_operational": 3, 00:21:27.933 "base_bdevs_list": [ 00:21:27.933 { 00:21:27.933 "name": "spare", 00:21:27.933 "uuid": "48e44ebd-2d69-500c-931e-d69dca2c5a36", 00:21:27.933 "is_configured": true, 00:21:27.933 "data_offset": 0, 00:21:27.933 "data_size": 65536 00:21:27.933 }, 00:21:27.933 { 00:21:27.933 "name": null, 00:21:27.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.933 "is_configured": false, 00:21:27.933 "data_offset": 0, 00:21:27.933 "data_size": 65536 00:21:27.933 }, 00:21:27.933 { 
00:21:27.933 "name": "BaseBdev3", 00:21:27.933 "uuid": "11d7f601-9476-4d6e-bbae-1aeebb918057", 00:21:27.933 "is_configured": true, 00:21:27.933 "data_offset": 0, 00:21:27.933 "data_size": 65536 00:21:27.933 }, 00:21:27.933 { 00:21:27.933 "name": "BaseBdev4", 00:21:27.933 "uuid": "c280cf61-6f54-47c7-80b7-77b0f5d7a23c", 00:21:27.933 "is_configured": true, 00:21:27.933 "data_offset": 0, 00:21:27.933 "data_size": 65536 00:21:27.933 } 00:21:27.933 ] 00:21:27.933 }' 00:21:27.933 05:18:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:27.933 05:18:46 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:27.933 05:18:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:27.933 05:18:46 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:27.933 05:18:46 -- bdev/bdev_raid.sh@660 -- # break 00:21:27.933 05:18:46 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:27.933 05:18:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:27.933 05:18:46 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:27.934 05:18:46 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:27.934 05:18:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:27.934 05:18:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.934 05:18:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.193 05:18:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:28.193 "name": "raid_bdev1", 00:21:28.193 "uuid": "4fafad1d-3bc0-4d98-bb87-7d62bbf4663b", 00:21:28.193 "strip_size_kb": 0, 00:21:28.193 "state": "online", 00:21:28.193 "raid_level": "raid1", 00:21:28.193 "superblock": false, 00:21:28.193 "num_base_bdevs": 4, 00:21:28.193 "num_base_bdevs_discovered": 3, 00:21:28.193 "num_base_bdevs_operational": 3, 00:21:28.193 "base_bdevs_list": [ 00:21:28.193 { 00:21:28.193 "name": "spare", 00:21:28.193 "uuid": "48e44ebd-2d69-500c-931e-d69dca2c5a36", 00:21:28.193 "is_configured": true, 00:21:28.193 "data_offset": 0, 00:21:28.193 "data_size": 65536 00:21:28.193 }, 00:21:28.193 { 00:21:28.193 "name": null, 00:21:28.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.193 "is_configured": false, 00:21:28.193 "data_offset": 0, 00:21:28.193 "data_size": 65536 00:21:28.193 }, 00:21:28.193 { 00:21:28.193 "name": "BaseBdev3", 00:21:28.193 "uuid": "11d7f601-9476-4d6e-bbae-1aeebb918057", 00:21:28.193 "is_configured": true, 00:21:28.193 "data_offset": 0, 00:21:28.193 "data_size": 65536 00:21:28.193 }, 00:21:28.193 { 00:21:28.193 "name": "BaseBdev4", 00:21:28.193 "uuid": "c280cf61-6f54-47c7-80b7-77b0f5d7a23c", 00:21:28.193 "is_configured": true, 00:21:28.193 "data_offset": 0, 00:21:28.193 "data_size": 65536 00:21:28.193 } 00:21:28.193 ] 00:21:28.193 }' 00:21:28.193 05:18:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:28.193 05:18:47 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:28.193 05:18:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:28.193 05:18:47 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:28.193 05:18:47 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:28.193 05:18:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:28.193 05:18:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:28.193 05:18:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:28.193 
05:18:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:28.193 05:18:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:28.193 05:18:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:28.193 05:18:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:28.193 05:18:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:28.193 05:18:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:28.193 05:18:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.193 05:18:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.453 05:18:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:28.453 "name": "raid_bdev1", 00:21:28.453 "uuid": "4fafad1d-3bc0-4d98-bb87-7d62bbf4663b", 00:21:28.453 "strip_size_kb": 0, 00:21:28.453 "state": "online", 00:21:28.453 "raid_level": "raid1", 00:21:28.453 "superblock": false, 00:21:28.453 "num_base_bdevs": 4, 00:21:28.453 "num_base_bdevs_discovered": 3, 00:21:28.453 "num_base_bdevs_operational": 3, 00:21:28.453 "base_bdevs_list": [ 00:21:28.453 { 00:21:28.453 "name": "spare", 00:21:28.453 "uuid": "48e44ebd-2d69-500c-931e-d69dca2c5a36", 00:21:28.453 "is_configured": true, 00:21:28.453 "data_offset": 0, 00:21:28.453 "data_size": 65536 00:21:28.453 }, 00:21:28.453 { 00:21:28.453 "name": null, 00:21:28.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.453 "is_configured": false, 00:21:28.453 "data_offset": 0, 00:21:28.453 "data_size": 65536 00:21:28.453 }, 00:21:28.453 { 00:21:28.453 "name": "BaseBdev3", 00:21:28.453 "uuid": "11d7f601-9476-4d6e-bbae-1aeebb918057", 00:21:28.453 "is_configured": true, 00:21:28.453 "data_offset": 0, 00:21:28.453 "data_size": 65536 00:21:28.453 }, 00:21:28.453 { 00:21:28.453 "name": "BaseBdev4", 00:21:28.453 "uuid": "c280cf61-6f54-47c7-80b7-77b0f5d7a23c", 00:21:28.453 "is_configured": true, 00:21:28.453 "data_offset": 0, 00:21:28.453 "data_size": 65536 00:21:28.453 } 00:21:28.453 ] 00:21:28.453 }' 00:21:28.453 05:18:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:28.453 05:18:47 -- common/autotest_common.sh@10 -- # set +x 00:21:28.712 05:18:47 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:28.971 [2024-07-26 05:18:47.912295] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:28.971 [2024-07-26 05:18:47.912331] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:28.971 [2024-07-26 05:18:47.912404] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:28.971 [2024-07-26 05:18:47.912476] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:28.971 [2024-07-26 05:18:47.912492] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:21:28.971 05:18:47 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.971 05:18:47 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:29.231 05:18:48 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:29.231 05:18:48 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:21:29.231 05:18:48 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:29.231 05:18:48 -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:21:29.231 05:18:48 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:29.231 05:18:48 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:29.231 05:18:48 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:29.231 05:18:48 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:29.231 05:18:48 -- bdev/nbd_common.sh@12 -- # local i 00:21:29.231 05:18:48 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:29.231 05:18:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:29.231 05:18:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:29.490 /dev/nbd0 00:21:29.490 05:18:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:29.490 05:18:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:29.490 05:18:48 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:29.490 05:18:48 -- common/autotest_common.sh@857 -- # local i 00:21:29.490 05:18:48 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:29.490 05:18:48 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:29.490 05:18:48 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:29.490 05:18:48 -- common/autotest_common.sh@861 -- # break 00:21:29.490 05:18:48 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:29.490 05:18:48 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:29.490 05:18:48 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:29.490 1+0 records in 00:21:29.490 1+0 records out 00:21:29.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454055 s, 9.0 MB/s 00:21:29.490 05:18:48 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:29.490 05:18:48 -- common/autotest_common.sh@874 -- # size=4096 00:21:29.490 05:18:48 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:29.490 05:18:48 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:29.490 05:18:48 -- common/autotest_common.sh@877 -- # return 0 00:21:29.490 05:18:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:29.490 05:18:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:29.490 05:18:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:29.749 /dev/nbd1 00:21:29.749 05:18:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:29.749 05:18:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:29.749 05:18:48 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:29.749 05:18:48 -- common/autotest_common.sh@857 -- # local i 00:21:29.750 05:18:48 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:29.750 05:18:48 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:29.750 05:18:48 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:29.750 05:18:48 -- common/autotest_common.sh@861 -- # break 00:21:29.750 05:18:48 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:29.750 05:18:48 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:29.750 05:18:48 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:29.750 1+0 records in 00:21:29.750 1+0 records out 00:21:29.750 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357028 s, 11.5 MB/s 00:21:29.750 05:18:48 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:29.750 05:18:48 -- common/autotest_common.sh@874 -- # size=4096 00:21:29.750 05:18:48 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:29.750 05:18:48 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:29.750 05:18:48 -- common/autotest_common.sh@877 -- # return 0 00:21:29.750 05:18:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:29.750 05:18:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:29.750 05:18:48 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:30.009 05:18:48 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:30.009 05:18:48 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:30.009 05:18:48 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:30.009 05:18:48 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:30.009 05:18:48 -- bdev/nbd_common.sh@51 -- # local i 00:21:30.009 05:18:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:30.009 05:18:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:30.009 05:18:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:30.009 05:18:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:30.009 05:18:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:30.009 05:18:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:30.009 05:18:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:30.009 05:18:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:30.009 05:18:49 -- bdev/nbd_common.sh@41 -- # break 00:21:30.009 05:18:49 -- bdev/nbd_common.sh@45 -- # return 0 00:21:30.009 05:18:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:30.009 05:18:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:30.269 05:18:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:30.269 05:18:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:30.269 05:18:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:30.269 05:18:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:30.269 05:18:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:30.269 05:18:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:30.269 05:18:49 -- bdev/nbd_common.sh@41 -- # break 00:21:30.269 05:18:49 -- bdev/nbd_common.sh@45 -- # return 0 00:21:30.269 05:18:49 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:21:30.269 05:18:49 -- bdev/bdev_raid.sh@709 -- # killprocess 80135 00:21:30.269 05:18:49 -- common/autotest_common.sh@926 -- # '[' -z 80135 ']' 00:21:30.269 05:18:49 -- common/autotest_common.sh@930 -- # kill -0 80135 00:21:30.269 05:18:49 -- common/autotest_common.sh@931 -- # uname 00:21:30.269 05:18:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:30.269 05:18:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 80135 00:21:30.269 killing process with pid 80135 00:21:30.269 Received shutdown signal, test time was about 60.000000 seconds 00:21:30.269 00:21:30.269 Latency(us) 00:21:30.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.269 =================================================================================================================== 00:21:30.269 Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:21:30.269 05:18:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:30.269 05:18:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:30.269 05:18:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 80135' 00:21:30.269 05:18:49 -- common/autotest_common.sh@945 -- # kill 80135 00:21:30.269 [2024-07-26 05:18:49.363692] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:30.269 05:18:49 -- common/autotest_common.sh@950 -- # wait 80135 00:21:30.838 [2024-07-26 05:18:49.682414] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:31.799 ************************************ 00:21:31.799 END TEST raid_rebuild_test 00:21:31.799 ************************************ 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:31.799 00:21:31.799 real 0m20.423s 00:21:31.799 user 0m25.888s 00:21:31.799 sys 0m3.815s 00:21:31.799 05:18:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:31.799 05:18:50 -- common/autotest_common.sh@10 -- # set +x 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false 00:21:31.799 05:18:50 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:31.799 05:18:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:31.799 05:18:50 -- common/autotest_common.sh@10 -- # set +x 00:21:31.799 ************************************ 00:21:31.799 START TEST raid_rebuild_test_sb 00:21:31.799 ************************************ 00:21:31.799 05:18:50 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true false 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' 
raid1 ']' 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@544 -- # raid_pid=80631 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@545 -- # waitforlisten 80631 /var/tmp/spdk-raid.sock 00:21:31.799 05:18:50 -- common/autotest_common.sh@819 -- # '[' -z 80631 ']' 00:21:31.799 05:18:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:31.799 05:18:50 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:31.799 05:18:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:31.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:31.799 05:18:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:31.799 05:18:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:31.799 05:18:50 -- common/autotest_common.sh@10 -- # set +x 00:21:31.799 [2024-07-26 05:18:50.733418] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:31.799 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:31.799 Zero copy mechanism will not be used. 00:21:31.799 [2024-07-26 05:18:50.733593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80631 ] 00:21:32.071 [2024-07-26 05:18:50.899373] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.071 [2024-07-26 05:18:51.047927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.330 [2024-07-26 05:18:51.192718] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:32.589 05:18:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:32.589 05:18:51 -- common/autotest_common.sh@852 -- # return 0 00:21:32.589 05:18:51 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:32.589 05:18:51 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:32.589 05:18:51 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:32.849 BaseBdev1_malloc 00:21:32.849 05:18:51 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:33.108 [2024-07-26 05:18:52.058558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:33.108 [2024-07-26 05:18:52.058659] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:33.108 [2024-07-26 05:18:52.058691] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:21:33.108 [2024-07-26 05:18:52.058707] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:33.108 [2024-07-26 05:18:52.060893] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:33.108 [2024-07-26 05:18:52.060967] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:33.108 BaseBdev1 00:21:33.108 
05:18:52 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:33.108 05:18:52 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:33.108 05:18:52 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:33.367 BaseBdev2_malloc 00:21:33.367 05:18:52 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:33.626 [2024-07-26 05:18:52.524281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:33.626 [2024-07-26 05:18:52.524361] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:33.626 [2024-07-26 05:18:52.524427] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:21:33.626 [2024-07-26 05:18:52.524445] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:33.626 [2024-07-26 05:18:52.526855] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:33.626 [2024-07-26 05:18:52.526920] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:33.626 BaseBdev2 00:21:33.626 05:18:52 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:33.626 05:18:52 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:33.626 05:18:52 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:33.885 BaseBdev3_malloc 00:21:33.885 05:18:52 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:33.885 [2024-07-26 05:18:52.977206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:33.885 [2024-07-26 05:18:52.977286] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:33.885 [2024-07-26 05:18:52.977314] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:21:33.885 [2024-07-26 05:18:52.977329] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:33.885 [2024-07-26 05:18:52.979538] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:33.885 [2024-07-26 05:18:52.979595] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:33.885 BaseBdev3 00:21:33.885 05:18:52 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:33.885 05:18:52 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:33.885 05:18:52 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:34.145 BaseBdev4_malloc 00:21:34.145 05:18:53 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:34.412 [2024-07-26 05:18:53.349200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:34.412 [2024-07-26 05:18:53.349291] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:34.412 [2024-07-26 05:18:53.349320] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:21:34.412 [2024-07-26 05:18:53.349335] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:34.412 [2024-07-26 05:18:53.352132] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:34.412 [2024-07-26 05:18:53.352191] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:34.412 BaseBdev4 00:21:34.412 05:18:53 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:34.670 spare_malloc 00:21:34.670 05:18:53 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:34.929 spare_delay 00:21:34.929 05:18:53 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:34.929 [2024-07-26 05:18:53.957612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:34.929 [2024-07-26 05:18:53.957701] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:34.929 [2024-07-26 05:18:53.957734] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:21:34.929 [2024-07-26 05:18:53.957749] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:34.929 [2024-07-26 05:18:53.960106] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:34.929 [2024-07-26 05:18:53.960166] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:34.929 spare 00:21:34.929 05:18:53 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:35.187 [2024-07-26 05:18:54.141680] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:35.187 [2024-07-26 05:18:54.143480] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:35.187 [2024-07-26 05:18:54.143553] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:35.187 [2024-07-26 05:18:54.143617] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:35.187 [2024-07-26 05:18:54.143857] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:21:35.187 [2024-07-26 05:18:54.143877] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:35.187 [2024-07-26 05:18:54.143982] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:21:35.187 [2024-07-26 05:18:54.144389] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:21:35.187 [2024-07-26 05:18:54.144406] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:21:35.187 [2024-07-26 05:18:54.144560] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:35.187 05:18:54 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:35.187 05:18:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:35.187 05:18:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:35.187 05:18:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:35.187 05:18:54 -- bdev/bdev_raid.sh@120 -- # local 
strip_size=0 00:21:35.188 05:18:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:35.188 05:18:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:35.188 05:18:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:35.188 05:18:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:35.188 05:18:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:35.188 05:18:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.188 05:18:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.446 05:18:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:35.446 "name": "raid_bdev1", 00:21:35.446 "uuid": "d305f017-7eda-4bb9-a5cc-992a4e492b64", 00:21:35.446 "strip_size_kb": 0, 00:21:35.446 "state": "online", 00:21:35.446 "raid_level": "raid1", 00:21:35.446 "superblock": true, 00:21:35.446 "num_base_bdevs": 4, 00:21:35.446 "num_base_bdevs_discovered": 4, 00:21:35.446 "num_base_bdevs_operational": 4, 00:21:35.446 "base_bdevs_list": [ 00:21:35.446 { 00:21:35.446 "name": "BaseBdev1", 00:21:35.446 "uuid": "92fa4818-17cf-5336-8010-cd397311a792", 00:21:35.446 "is_configured": true, 00:21:35.446 "data_offset": 2048, 00:21:35.446 "data_size": 63488 00:21:35.446 }, 00:21:35.446 { 00:21:35.446 "name": "BaseBdev2", 00:21:35.446 "uuid": "683e444c-b940-5916-9adf-a699d07f8938", 00:21:35.446 "is_configured": true, 00:21:35.446 "data_offset": 2048, 00:21:35.446 "data_size": 63488 00:21:35.446 }, 00:21:35.446 { 00:21:35.446 "name": "BaseBdev3", 00:21:35.446 "uuid": "f9dbe555-db60-5dfe-9995-260ca34deccc", 00:21:35.446 "is_configured": true, 00:21:35.446 "data_offset": 2048, 00:21:35.446 "data_size": 63488 00:21:35.446 }, 00:21:35.446 { 00:21:35.446 "name": "BaseBdev4", 00:21:35.446 "uuid": "330298ea-90dd-502d-af41-331c79693aff", 00:21:35.446 "is_configured": true, 00:21:35.446 "data_offset": 2048, 00:21:35.446 "data_size": 63488 00:21:35.446 } 00:21:35.446 ] 00:21:35.446 }' 00:21:35.446 05:18:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:35.446 05:18:54 -- common/autotest_common.sh@10 -- # set +x 00:21:35.704 05:18:54 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:35.704 05:18:54 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:35.963 [2024-07-26 05:18:54.873961] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:35.963 05:18:54 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:21:35.963 05:18:54 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.963 05:18:54 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:36.222 05:18:55 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:21:36.222 05:18:55 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:21:36.222 05:18:55 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:21:36.222 05:18:55 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:36.222 05:18:55 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:36.222 05:18:55 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:36.222 05:18:55 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:36.222 05:18:55 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:36.222 05:18:55 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:36.222 
05:18:55 -- bdev/nbd_common.sh@12 -- # local i 00:21:36.222 05:18:55 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:36.222 05:18:55 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:36.222 05:18:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:36.222 [2024-07-26 05:18:55.321934] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:21:36.480 /dev/nbd0 00:21:36.480 05:18:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:36.480 05:18:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:36.480 05:18:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:36.480 05:18:55 -- common/autotest_common.sh@857 -- # local i 00:21:36.480 05:18:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:36.480 05:18:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:36.480 05:18:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:36.480 05:18:55 -- common/autotest_common.sh@861 -- # break 00:21:36.480 05:18:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:36.480 05:18:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:36.480 05:18:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:36.480 1+0 records in 00:21:36.480 1+0 records out 00:21:36.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216938 s, 18.9 MB/s 00:21:36.480 05:18:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:36.480 05:18:55 -- common/autotest_common.sh@874 -- # size=4096 00:21:36.480 05:18:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:36.480 05:18:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:36.480 05:18:55 -- common/autotest_common.sh@877 -- # return 0 00:21:36.480 05:18:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:36.480 05:18:55 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:36.480 05:18:55 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:21:36.480 05:18:55 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:21:36.480 05:18:55 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:21:43.040 63488+0 records in 00:21:43.040 63488+0 records out 00:21:43.040 32505856 bytes (33 MB, 31 MiB) copied, 6.66495 s, 4.9 MB/s 00:21:43.040 05:19:02 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:43.040 05:19:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:43.040 05:19:02 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:43.040 05:19:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:43.040 05:19:02 -- bdev/nbd_common.sh@51 -- # local i 00:21:43.040 05:19:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:43.040 05:19:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:43.297 05:19:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:43.297 05:19:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:43.297 05:19:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:43.297 05:19:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:43.297 05:19:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:43.297 05:19:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:43.297 
[2024-07-26 05:19:02.287083] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:43.297 05:19:02 -- bdev/nbd_common.sh@41 -- # break 00:21:43.297 05:19:02 -- bdev/nbd_common.sh@45 -- # return 0 00:21:43.297 05:19:02 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:43.555 [2024-07-26 05:19:02.527206] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:43.555 05:19:02 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:43.555 05:19:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:43.555 05:19:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:43.555 05:19:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:43.555 05:19:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:43.555 05:19:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:43.555 05:19:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:43.555 05:19:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:43.555 05:19:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:43.555 05:19:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:43.555 05:19:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.555 05:19:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.814 05:19:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:43.814 "name": "raid_bdev1", 00:21:43.814 "uuid": "d305f017-7eda-4bb9-a5cc-992a4e492b64", 00:21:43.814 "strip_size_kb": 0, 00:21:43.814 "state": "online", 00:21:43.814 "raid_level": "raid1", 00:21:43.814 "superblock": true, 00:21:43.814 "num_base_bdevs": 4, 00:21:43.814 "num_base_bdevs_discovered": 3, 00:21:43.814 "num_base_bdevs_operational": 3, 00:21:43.814 "base_bdevs_list": [ 00:21:43.814 { 00:21:43.814 "name": null, 00:21:43.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.814 "is_configured": false, 00:21:43.814 "data_offset": 2048, 00:21:43.814 "data_size": 63488 00:21:43.814 }, 00:21:43.814 { 00:21:43.814 "name": "BaseBdev2", 00:21:43.814 "uuid": "683e444c-b940-5916-9adf-a699d07f8938", 00:21:43.814 "is_configured": true, 00:21:43.814 "data_offset": 2048, 00:21:43.814 "data_size": 63488 00:21:43.814 }, 00:21:43.814 { 00:21:43.814 "name": "BaseBdev3", 00:21:43.814 "uuid": "f9dbe555-db60-5dfe-9995-260ca34deccc", 00:21:43.814 "is_configured": true, 00:21:43.814 "data_offset": 2048, 00:21:43.814 "data_size": 63488 00:21:43.814 }, 00:21:43.814 { 00:21:43.814 "name": "BaseBdev4", 00:21:43.814 "uuid": "330298ea-90dd-502d-af41-331c79693aff", 00:21:43.814 "is_configured": true, 00:21:43.814 "data_offset": 2048, 00:21:43.814 "data_size": 63488 00:21:43.814 } 00:21:43.814 ] 00:21:43.814 }' 00:21:43.814 05:19:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:43.814 05:19:02 -- common/autotest_common.sh@10 -- # set +x 00:21:44.072 05:19:02 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:44.331 [2024-07-26 05:19:03.195384] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:44.331 [2024-07-26 05:19:03.195439] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:44.331 [2024-07-26 05:19:03.205809] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x50d000ca2db0 00:21:44.331 [2024-07-26 05:19:03.207854] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:44.331 05:19:03 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:45.265 05:19:04 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:45.265 05:19:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:45.265 05:19:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:45.265 05:19:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:45.265 05:19:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:45.265 05:19:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.265 05:19:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.523 05:19:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:45.523 "name": "raid_bdev1", 00:21:45.523 "uuid": "d305f017-7eda-4bb9-a5cc-992a4e492b64", 00:21:45.523 "strip_size_kb": 0, 00:21:45.523 "state": "online", 00:21:45.523 "raid_level": "raid1", 00:21:45.523 "superblock": true, 00:21:45.523 "num_base_bdevs": 4, 00:21:45.523 "num_base_bdevs_discovered": 4, 00:21:45.523 "num_base_bdevs_operational": 4, 00:21:45.523 "process": { 00:21:45.523 "type": "rebuild", 00:21:45.523 "target": "spare", 00:21:45.523 "progress": { 00:21:45.523 "blocks": 22528, 00:21:45.523 "percent": 35 00:21:45.523 } 00:21:45.523 }, 00:21:45.523 "base_bdevs_list": [ 00:21:45.523 { 00:21:45.523 "name": "spare", 00:21:45.523 "uuid": "30122453-2fa4-5adf-9415-b334498654c6", 00:21:45.523 "is_configured": true, 00:21:45.523 "data_offset": 2048, 00:21:45.523 "data_size": 63488 00:21:45.523 }, 00:21:45.523 { 00:21:45.523 "name": "BaseBdev2", 00:21:45.523 "uuid": "683e444c-b940-5916-9adf-a699d07f8938", 00:21:45.523 "is_configured": true, 00:21:45.523 "data_offset": 2048, 00:21:45.523 "data_size": 63488 00:21:45.523 }, 00:21:45.523 { 00:21:45.523 "name": "BaseBdev3", 00:21:45.523 "uuid": "f9dbe555-db60-5dfe-9995-260ca34deccc", 00:21:45.523 "is_configured": true, 00:21:45.523 "data_offset": 2048, 00:21:45.523 "data_size": 63488 00:21:45.523 }, 00:21:45.523 { 00:21:45.523 "name": "BaseBdev4", 00:21:45.523 "uuid": "330298ea-90dd-502d-af41-331c79693aff", 00:21:45.523 "is_configured": true, 00:21:45.523 "data_offset": 2048, 00:21:45.523 "data_size": 63488 00:21:45.523 } 00:21:45.523 ] 00:21:45.523 }' 00:21:45.523 05:19:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:45.523 05:19:04 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:45.523 05:19:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:45.523 05:19:04 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:45.523 05:19:04 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:45.782 [2024-07-26 05:19:04.646118] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:45.782 [2024-07-26 05:19:04.714480] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:45.782 [2024-07-26 05:19:04.714573] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:45.782 05:19:04 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:45.782 05:19:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:45.782 
05:19:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:45.782 05:19:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:45.782 05:19:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:45.782 05:19:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:45.782 05:19:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:45.782 05:19:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:45.782 05:19:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:45.782 05:19:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:45.782 05:19:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.782 05:19:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.041 05:19:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:46.041 "name": "raid_bdev1", 00:21:46.041 "uuid": "d305f017-7eda-4bb9-a5cc-992a4e492b64", 00:21:46.041 "strip_size_kb": 0, 00:21:46.041 "state": "online", 00:21:46.041 "raid_level": "raid1", 00:21:46.041 "superblock": true, 00:21:46.041 "num_base_bdevs": 4, 00:21:46.041 "num_base_bdevs_discovered": 3, 00:21:46.041 "num_base_bdevs_operational": 3, 00:21:46.041 "base_bdevs_list": [ 00:21:46.041 { 00:21:46.041 "name": null, 00:21:46.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.041 "is_configured": false, 00:21:46.041 "data_offset": 2048, 00:21:46.041 "data_size": 63488 00:21:46.041 }, 00:21:46.041 { 00:21:46.041 "name": "BaseBdev2", 00:21:46.041 "uuid": "683e444c-b940-5916-9adf-a699d07f8938", 00:21:46.041 "is_configured": true, 00:21:46.041 "data_offset": 2048, 00:21:46.041 "data_size": 63488 00:21:46.041 }, 00:21:46.041 { 00:21:46.041 "name": "BaseBdev3", 00:21:46.041 "uuid": "f9dbe555-db60-5dfe-9995-260ca34deccc", 00:21:46.041 "is_configured": true, 00:21:46.041 "data_offset": 2048, 00:21:46.041 "data_size": 63488 00:21:46.041 }, 00:21:46.041 { 00:21:46.041 "name": "BaseBdev4", 00:21:46.041 "uuid": "330298ea-90dd-502d-af41-331c79693aff", 00:21:46.041 "is_configured": true, 00:21:46.041 "data_offset": 2048, 00:21:46.041 "data_size": 63488 00:21:46.041 } 00:21:46.041 ] 00:21:46.041 }' 00:21:46.041 05:19:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:46.041 05:19:04 -- common/autotest_common.sh@10 -- # set +x 00:21:46.301 05:19:05 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:46.301 05:19:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:46.301 05:19:05 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:46.301 05:19:05 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:46.301 05:19:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:46.301 05:19:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.301 05:19:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.560 05:19:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:46.560 "name": "raid_bdev1", 00:21:46.560 "uuid": "d305f017-7eda-4bb9-a5cc-992a4e492b64", 00:21:46.560 "strip_size_kb": 0, 00:21:46.560 "state": "online", 00:21:46.560 "raid_level": "raid1", 00:21:46.560 "superblock": true, 00:21:46.560 "num_base_bdevs": 4, 00:21:46.561 "num_base_bdevs_discovered": 3, 00:21:46.561 "num_base_bdevs_operational": 3, 00:21:46.561 "base_bdevs_list": [ 00:21:46.561 { 00:21:46.561 "name": null, 00:21:46.561 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:46.561 "is_configured": false, 00:21:46.561 "data_offset": 2048, 00:21:46.561 "data_size": 63488 00:21:46.561 }, 00:21:46.561 { 00:21:46.561 "name": "BaseBdev2", 00:21:46.561 "uuid": "683e444c-b940-5916-9adf-a699d07f8938", 00:21:46.561 "is_configured": true, 00:21:46.561 "data_offset": 2048, 00:21:46.561 "data_size": 63488 00:21:46.561 }, 00:21:46.561 { 00:21:46.561 "name": "BaseBdev3", 00:21:46.561 "uuid": "f9dbe555-db60-5dfe-9995-260ca34deccc", 00:21:46.561 "is_configured": true, 00:21:46.561 "data_offset": 2048, 00:21:46.561 "data_size": 63488 00:21:46.561 }, 00:21:46.561 { 00:21:46.561 "name": "BaseBdev4", 00:21:46.561 "uuid": "330298ea-90dd-502d-af41-331c79693aff", 00:21:46.561 "is_configured": true, 00:21:46.561 "data_offset": 2048, 00:21:46.561 "data_size": 63488 00:21:46.561 } 00:21:46.561 ] 00:21:46.561 }' 00:21:46.561 05:19:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:46.561 05:19:05 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:46.561 05:19:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:46.561 05:19:05 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:46.561 05:19:05 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:46.820 [2024-07-26 05:19:05.724839] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:46.820 [2024-07-26 05:19:05.724897] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:46.820 [2024-07-26 05:19:05.735551] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000ca2e80 00:21:46.820 [2024-07-26 05:19:05.737442] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:46.820 05:19:05 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:47.757 05:19:06 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:47.757 05:19:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:47.757 05:19:06 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:47.757 05:19:06 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:47.757 05:19:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:47.757 05:19:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.757 05:19:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.016 05:19:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:48.016 "name": "raid_bdev1", 00:21:48.016 "uuid": "d305f017-7eda-4bb9-a5cc-992a4e492b64", 00:21:48.016 "strip_size_kb": 0, 00:21:48.016 "state": "online", 00:21:48.016 "raid_level": "raid1", 00:21:48.016 "superblock": true, 00:21:48.016 "num_base_bdevs": 4, 00:21:48.016 "num_base_bdevs_discovered": 4, 00:21:48.016 "num_base_bdevs_operational": 4, 00:21:48.016 "process": { 00:21:48.016 "type": "rebuild", 00:21:48.016 "target": "spare", 00:21:48.016 "progress": { 00:21:48.016 "blocks": 22528, 00:21:48.016 "percent": 35 00:21:48.016 } 00:21:48.016 }, 00:21:48.016 "base_bdevs_list": [ 00:21:48.016 { 00:21:48.016 "name": "spare", 00:21:48.016 "uuid": "30122453-2fa4-5adf-9415-b334498654c6", 00:21:48.016 "is_configured": true, 00:21:48.016 "data_offset": 2048, 00:21:48.016 "data_size": 63488 00:21:48.016 }, 00:21:48.016 { 00:21:48.016 "name": "BaseBdev2", 00:21:48.016 "uuid": 
"683e444c-b940-5916-9adf-a699d07f8938", 00:21:48.016 "is_configured": true, 00:21:48.016 "data_offset": 2048, 00:21:48.016 "data_size": 63488 00:21:48.016 }, 00:21:48.016 { 00:21:48.016 "name": "BaseBdev3", 00:21:48.016 "uuid": "f9dbe555-db60-5dfe-9995-260ca34deccc", 00:21:48.016 "is_configured": true, 00:21:48.016 "data_offset": 2048, 00:21:48.016 "data_size": 63488 00:21:48.016 }, 00:21:48.016 { 00:21:48.016 "name": "BaseBdev4", 00:21:48.016 "uuid": "330298ea-90dd-502d-af41-331c79693aff", 00:21:48.016 "is_configured": true, 00:21:48.016 "data_offset": 2048, 00:21:48.016 "data_size": 63488 00:21:48.016 } 00:21:48.016 ] 00:21:48.016 }' 00:21:48.016 05:19:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:48.016 05:19:06 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:48.016 05:19:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:48.016 05:19:06 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:48.016 05:19:06 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:21:48.016 05:19:06 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:21:48.016 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:21:48.016 05:19:06 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:21:48.016 05:19:06 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:48.016 05:19:06 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:21:48.016 05:19:06 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:48.276 [2024-07-26 05:19:07.139744] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:48.276 [2024-07-26 05:19:07.143685] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000ca2e80 00:21:48.276 05:19:07 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:21:48.276 05:19:07 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:21:48.276 05:19:07 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:48.276 05:19:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:48.276 05:19:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:48.276 05:19:07 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:48.276 05:19:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:48.276 05:19:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.276 05:19:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.535 05:19:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:48.535 "name": "raid_bdev1", 00:21:48.535 "uuid": "d305f017-7eda-4bb9-a5cc-992a4e492b64", 00:21:48.535 "strip_size_kb": 0, 00:21:48.535 "state": "online", 00:21:48.535 "raid_level": "raid1", 00:21:48.535 "superblock": true, 00:21:48.535 "num_base_bdevs": 4, 00:21:48.535 "num_base_bdevs_discovered": 3, 00:21:48.535 "num_base_bdevs_operational": 3, 00:21:48.535 "process": { 00:21:48.535 "type": "rebuild", 00:21:48.535 "target": "spare", 00:21:48.535 "progress": { 00:21:48.535 "blocks": 34816, 00:21:48.535 "percent": 54 00:21:48.535 } 00:21:48.535 }, 00:21:48.535 "base_bdevs_list": [ 00:21:48.535 { 00:21:48.535 "name": "spare", 00:21:48.535 "uuid": "30122453-2fa4-5adf-9415-b334498654c6", 00:21:48.535 "is_configured": true, 00:21:48.535 "data_offset": 2048, 00:21:48.535 "data_size": 63488 00:21:48.535 }, 
00:21:48.535 { 00:21:48.535 "name": null, 00:21:48.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.535 "is_configured": false, 00:21:48.535 "data_offset": 2048, 00:21:48.535 "data_size": 63488 00:21:48.535 }, 00:21:48.535 { 00:21:48.535 "name": "BaseBdev3", 00:21:48.535 "uuid": "f9dbe555-db60-5dfe-9995-260ca34deccc", 00:21:48.535 "is_configured": true, 00:21:48.535 "data_offset": 2048, 00:21:48.535 "data_size": 63488 00:21:48.535 }, 00:21:48.535 { 00:21:48.535 "name": "BaseBdev4", 00:21:48.535 "uuid": "330298ea-90dd-502d-af41-331c79693aff", 00:21:48.535 "is_configured": true, 00:21:48.535 "data_offset": 2048, 00:21:48.535 "data_size": 63488 00:21:48.535 } 00:21:48.535 ] 00:21:48.535 }' 00:21:48.535 05:19:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:48.535 05:19:07 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:48.535 05:19:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:48.535 05:19:07 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:48.535 05:19:07 -- bdev/bdev_raid.sh@657 -- # local timeout=454 00:21:48.535 05:19:07 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:48.535 05:19:07 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:48.535 05:19:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:48.535 05:19:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:48.535 05:19:07 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:48.535 05:19:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:48.535 05:19:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.535 05:19:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.794 05:19:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:48.794 "name": "raid_bdev1", 00:21:48.794 "uuid": "d305f017-7eda-4bb9-a5cc-992a4e492b64", 00:21:48.794 "strip_size_kb": 0, 00:21:48.794 "state": "online", 00:21:48.794 "raid_level": "raid1", 00:21:48.794 "superblock": true, 00:21:48.794 "num_base_bdevs": 4, 00:21:48.794 "num_base_bdevs_discovered": 3, 00:21:48.794 "num_base_bdevs_operational": 3, 00:21:48.794 "process": { 00:21:48.794 "type": "rebuild", 00:21:48.794 "target": "spare", 00:21:48.794 "progress": { 00:21:48.794 "blocks": 38912, 00:21:48.794 "percent": 61 00:21:48.794 } 00:21:48.794 }, 00:21:48.794 "base_bdevs_list": [ 00:21:48.794 { 00:21:48.794 "name": "spare", 00:21:48.794 "uuid": "30122453-2fa4-5adf-9415-b334498654c6", 00:21:48.794 "is_configured": true, 00:21:48.794 "data_offset": 2048, 00:21:48.794 "data_size": 63488 00:21:48.794 }, 00:21:48.794 { 00:21:48.794 "name": null, 00:21:48.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.794 "is_configured": false, 00:21:48.794 "data_offset": 2048, 00:21:48.794 "data_size": 63488 00:21:48.794 }, 00:21:48.794 { 00:21:48.794 "name": "BaseBdev3", 00:21:48.794 "uuid": "f9dbe555-db60-5dfe-9995-260ca34deccc", 00:21:48.794 "is_configured": true, 00:21:48.794 "data_offset": 2048, 00:21:48.794 "data_size": 63488 00:21:48.794 }, 00:21:48.795 { 00:21:48.795 "name": "BaseBdev4", 00:21:48.795 "uuid": "330298ea-90dd-502d-af41-331c79693aff", 00:21:48.795 "is_configured": true, 00:21:48.795 "data_offset": 2048, 00:21:48.795 "data_size": 63488 00:21:48.795 } 00:21:48.795 ] 00:21:48.795 }' 00:21:48.795 05:19:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:48.795 05:19:07 -- 
bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:48.795 05:19:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:48.795 05:19:07 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:48.795 05:19:07 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:49.732 05:19:08 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:49.732 05:19:08 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:49.732 05:19:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:49.732 05:19:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:49.733 05:19:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:49.733 05:19:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:49.733 05:19:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.733 05:19:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.992 [2024-07-26 05:19:08.851318] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:49.992 [2024-07-26 05:19:08.851460] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:49.992 [2024-07-26 05:19:08.851593] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:49.992 05:19:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:49.992 "name": "raid_bdev1", 00:21:49.992 "uuid": "d305f017-7eda-4bb9-a5cc-992a4e492b64", 00:21:49.992 "strip_size_kb": 0, 00:21:49.992 "state": "online", 00:21:49.992 "raid_level": "raid1", 00:21:49.992 "superblock": true, 00:21:49.992 "num_base_bdevs": 4, 00:21:49.992 "num_base_bdevs_discovered": 3, 00:21:49.992 "num_base_bdevs_operational": 3, 00:21:49.992 "base_bdevs_list": [ 00:21:49.992 { 00:21:49.992 "name": "spare", 00:21:49.992 "uuid": "30122453-2fa4-5adf-9415-b334498654c6", 00:21:49.992 "is_configured": true, 00:21:49.992 "data_offset": 2048, 00:21:49.992 "data_size": 63488 00:21:49.992 }, 00:21:49.992 { 00:21:49.992 "name": null, 00:21:49.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.992 "is_configured": false, 00:21:49.992 "data_offset": 2048, 00:21:49.992 "data_size": 63488 00:21:49.992 }, 00:21:49.992 { 00:21:49.992 "name": "BaseBdev3", 00:21:49.992 "uuid": "f9dbe555-db60-5dfe-9995-260ca34deccc", 00:21:49.992 "is_configured": true, 00:21:49.992 "data_offset": 2048, 00:21:49.992 "data_size": 63488 00:21:49.992 }, 00:21:49.992 { 00:21:49.992 "name": "BaseBdev4", 00:21:49.992 "uuid": "330298ea-90dd-502d-af41-331c79693aff", 00:21:49.992 "is_configured": true, 00:21:49.992 "data_offset": 2048, 00:21:49.992 "data_size": 63488 00:21:49.992 } 00:21:49.992 ] 00:21:49.992 }' 00:21:49.992 05:19:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:49.992 05:19:08 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:49.992 05:19:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:49.992 05:19:08 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:49.992 05:19:08 -- bdev/bdev_raid.sh@660 -- # break 00:21:49.992 05:19:08 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:49.992 05:19:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:49.992 05:19:08 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:49.992 05:19:08 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:49.992 05:19:08 -- bdev/bdev_raid.sh@186 -- # 
local raid_bdev_info 00:21:49.992 05:19:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.992 05:19:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.251 05:19:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:50.251 "name": "raid_bdev1", 00:21:50.251 "uuid": "d305f017-7eda-4bb9-a5cc-992a4e492b64", 00:21:50.251 "strip_size_kb": 0, 00:21:50.251 "state": "online", 00:21:50.251 "raid_level": "raid1", 00:21:50.251 "superblock": true, 00:21:50.251 "num_base_bdevs": 4, 00:21:50.251 "num_base_bdevs_discovered": 3, 00:21:50.251 "num_base_bdevs_operational": 3, 00:21:50.251 "base_bdevs_list": [ 00:21:50.251 { 00:21:50.251 "name": "spare", 00:21:50.251 "uuid": "30122453-2fa4-5adf-9415-b334498654c6", 00:21:50.251 "is_configured": true, 00:21:50.251 "data_offset": 2048, 00:21:50.251 "data_size": 63488 00:21:50.251 }, 00:21:50.251 { 00:21:50.251 "name": null, 00:21:50.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.251 "is_configured": false, 00:21:50.251 "data_offset": 2048, 00:21:50.251 "data_size": 63488 00:21:50.251 }, 00:21:50.251 { 00:21:50.251 "name": "BaseBdev3", 00:21:50.251 "uuid": "f9dbe555-db60-5dfe-9995-260ca34deccc", 00:21:50.251 "is_configured": true, 00:21:50.251 "data_offset": 2048, 00:21:50.251 "data_size": 63488 00:21:50.251 }, 00:21:50.251 { 00:21:50.251 "name": "BaseBdev4", 00:21:50.251 "uuid": "330298ea-90dd-502d-af41-331c79693aff", 00:21:50.251 "is_configured": true, 00:21:50.251 "data_offset": 2048, 00:21:50.251 "data_size": 63488 00:21:50.251 } 00:21:50.251 ] 00:21:50.251 }' 00:21:50.251 05:19:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:50.251 05:19:09 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:50.251 05:19:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:50.251 05:19:09 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:50.251 05:19:09 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:50.251 05:19:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:50.251 05:19:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:50.251 05:19:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:50.251 05:19:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:50.251 05:19:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:50.251 05:19:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:50.251 05:19:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:50.251 05:19:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:50.251 05:19:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:50.251 05:19:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.251 05:19:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.510 05:19:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:50.510 "name": "raid_bdev1", 00:21:50.510 "uuid": "d305f017-7eda-4bb9-a5cc-992a4e492b64", 00:21:50.510 "strip_size_kb": 0, 00:21:50.510 "state": "online", 00:21:50.510 "raid_level": "raid1", 00:21:50.510 "superblock": true, 00:21:50.510 "num_base_bdevs": 4, 00:21:50.510 "num_base_bdevs_discovered": 3, 00:21:50.510 "num_base_bdevs_operational": 3, 00:21:50.510 "base_bdevs_list": [ 00:21:50.510 { 00:21:50.510 "name": "spare", 00:21:50.510 "uuid": 
"30122453-2fa4-5adf-9415-b334498654c6", 00:21:50.510 "is_configured": true, 00:21:50.510 "data_offset": 2048, 00:21:50.510 "data_size": 63488 00:21:50.510 }, 00:21:50.510 { 00:21:50.510 "name": null, 00:21:50.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.510 "is_configured": false, 00:21:50.510 "data_offset": 2048, 00:21:50.510 "data_size": 63488 00:21:50.510 }, 00:21:50.510 { 00:21:50.510 "name": "BaseBdev3", 00:21:50.510 "uuid": "f9dbe555-db60-5dfe-9995-260ca34deccc", 00:21:50.510 "is_configured": true, 00:21:50.510 "data_offset": 2048, 00:21:50.510 "data_size": 63488 00:21:50.510 }, 00:21:50.510 { 00:21:50.510 "name": "BaseBdev4", 00:21:50.510 "uuid": "330298ea-90dd-502d-af41-331c79693aff", 00:21:50.510 "is_configured": true, 00:21:50.510 "data_offset": 2048, 00:21:50.510 "data_size": 63488 00:21:50.510 } 00:21:50.510 ] 00:21:50.510 }' 00:21:50.510 05:19:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:50.510 05:19:09 -- common/autotest_common.sh@10 -- # set +x 00:21:50.769 05:19:09 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:51.028 [2024-07-26 05:19:09.975693] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:51.028 [2024-07-26 05:19:09.975726] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:51.028 [2024-07-26 05:19:09.975805] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:51.028 [2024-07-26 05:19:09.975891] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:51.028 [2024-07-26 05:19:09.975906] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:21:51.028 05:19:09 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.028 05:19:09 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:51.287 05:19:10 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:51.287 05:19:10 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:21:51.287 05:19:10 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:51.287 05:19:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:51.287 05:19:10 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:51.287 05:19:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:51.287 05:19:10 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:51.287 05:19:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:51.287 05:19:10 -- bdev/nbd_common.sh@12 -- # local i 00:21:51.287 05:19:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:51.287 05:19:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:51.287 05:19:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:51.546 /dev/nbd0 00:21:51.546 05:19:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:51.546 05:19:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:51.546 05:19:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:51.546 05:19:10 -- common/autotest_common.sh@857 -- # local i 00:21:51.546 05:19:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:51.546 05:19:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:51.546 05:19:10 -- 
common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:51.546 05:19:10 -- common/autotest_common.sh@861 -- # break 00:21:51.546 05:19:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:51.546 05:19:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:51.546 05:19:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:51.546 1+0 records in 00:21:51.546 1+0 records out 00:21:51.546 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189446 s, 21.6 MB/s 00:21:51.546 05:19:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:51.546 05:19:10 -- common/autotest_common.sh@874 -- # size=4096 00:21:51.546 05:19:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:51.546 05:19:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:51.546 05:19:10 -- common/autotest_common.sh@877 -- # return 0 00:21:51.546 05:19:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:51.546 05:19:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:51.546 05:19:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:51.546 /dev/nbd1 00:21:51.805 05:19:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:51.805 05:19:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:51.805 05:19:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:51.805 05:19:10 -- common/autotest_common.sh@857 -- # local i 00:21:51.805 05:19:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:51.805 05:19:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:51.805 05:19:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:51.805 05:19:10 -- common/autotest_common.sh@861 -- # break 00:21:51.805 05:19:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:51.805 05:19:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:51.805 05:19:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:51.805 1+0 records in 00:21:51.805 1+0 records out 00:21:51.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360874 s, 11.4 MB/s 00:21:51.805 05:19:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:51.805 05:19:10 -- common/autotest_common.sh@874 -- # size=4096 00:21:51.805 05:19:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:51.805 05:19:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:51.805 05:19:10 -- common/autotest_common.sh@877 -- # return 0 00:21:51.805 05:19:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:51.805 05:19:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:51.805 05:19:10 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:51.805 05:19:10 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:51.805 05:19:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:51.805 05:19:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:51.805 05:19:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:51.805 05:19:10 -- bdev/nbd_common.sh@51 -- # local i 00:21:51.805 05:19:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:51.806 05:19:10 -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:52.065 05:19:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:52.065 05:19:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:52.065 05:19:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:52.065 05:19:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:52.065 05:19:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:52.065 05:19:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:52.065 05:19:11 -- bdev/nbd_common.sh@41 -- # break 00:21:52.065 05:19:11 -- bdev/nbd_common.sh@45 -- # return 0 00:21:52.065 05:19:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:52.065 05:19:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:52.324 05:19:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:52.324 05:19:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:52.324 05:19:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:52.324 05:19:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:52.324 05:19:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:52.324 05:19:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:52.324 05:19:11 -- bdev/nbd_common.sh@41 -- # break 00:21:52.324 05:19:11 -- bdev/nbd_common.sh@45 -- # return 0 00:21:52.324 05:19:11 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:21:52.324 05:19:11 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:52.324 05:19:11 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:21:52.324 05:19:11 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:52.609 05:19:11 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:52.879 [2024-07-26 05:19:11.851410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:52.879 [2024-07-26 05:19:11.851674] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:52.879 [2024-07-26 05:19:11.851755] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b480 00:21:52.879 [2024-07-26 05:19:11.851930] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:52.879 [2024-07-26 05:19:11.854238] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:52.879 [2024-07-26 05:19:11.854405] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:52.879 [2024-07-26 05:19:11.854684] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:52.880 [2024-07-26 05:19:11.854892] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:52.880 BaseBdev1 00:21:52.880 05:19:11 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:52.880 05:19:11 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:21:52.880 05:19:11 -- bdev/bdev_raid.sh@696 -- # continue 00:21:52.880 05:19:11 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:52.880 05:19:11 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:21:52.880 05:19:11 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:21:53.139 05:19:12 -- 
bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:53.139 [2024-07-26 05:19:12.223443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:53.139 [2024-07-26 05:19:12.223649] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:53.139 [2024-07-26 05:19:12.223722] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000bd80 00:21:53.139 [2024-07-26 05:19:12.223829] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:53.139 [2024-07-26 05:19:12.224384] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:53.139 [2024-07-26 05:19:12.224567] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:53.139 [2024-07-26 05:19:12.224758] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:21:53.139 [2024-07-26 05:19:12.224867] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:21:53.139 [2024-07-26 05:19:12.225111] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:53.139 [2024-07-26 05:19:12.225229] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ba80 name raid_bdev1, state configuring 00:21:53.139 [2024-07-26 05:19:12.225475] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:53.139 BaseBdev3 00:21:53.139 05:19:12 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:53.139 05:19:12 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:21:53.139 05:19:12 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:21:53.397 05:19:12 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:53.657 [2024-07-26 05:19:12.643528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:53.657 [2024-07-26 05:19:12.643590] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:53.657 [2024-07-26 05:19:12.643618] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c380 00:21:53.657 [2024-07-26 05:19:12.643632] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:53.657 [2024-07-26 05:19:12.644126] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:53.657 [2024-07-26 05:19:12.644172] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:53.657 [2024-07-26 05:19:12.644268] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:21:53.657 [2024-07-26 05:19:12.644310] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:53.657 BaseBdev4 00:21:53.657 05:19:12 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:53.915 05:19:12 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:53.915 [2024-07-26 05:19:13.015610] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on spare_delay 00:21:53.915 [2024-07-26 05:19:13.015691] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:53.915 [2024-07-26 05:19:13.015720] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c680 00:21:53.915 [2024-07-26 05:19:13.015735] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:53.916 [2024-07-26 05:19:13.016234] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:53.916 [2024-07-26 05:19:13.016272] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:53.916 [2024-07-26 05:19:13.016422] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:53.916 [2024-07-26 05:19:13.016457] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:53.916 spare 00:21:54.175 05:19:13 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:54.175 05:19:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:54.175 05:19:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:54.175 05:19:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:54.175 05:19:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:54.175 05:19:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:54.175 05:19:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:54.175 05:19:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:54.175 05:19:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:54.175 05:19:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:54.175 05:19:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.175 05:19:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.175 [2024-07-26 05:19:13.116578] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000c080 00:21:54.175 [2024-07-26 05:19:13.116612] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:54.175 [2024-07-26 05:19:13.116739] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000cc1530 00:21:54.175 [2024-07-26 05:19:13.117131] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000c080 00:21:54.175 [2024-07-26 05:19:13.117148] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000c080 00:21:54.175 [2024-07-26 05:19:13.117291] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:54.175 05:19:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:54.175 "name": "raid_bdev1", 00:21:54.175 "uuid": "d305f017-7eda-4bb9-a5cc-992a4e492b64", 00:21:54.175 "strip_size_kb": 0, 00:21:54.175 "state": "online", 00:21:54.175 "raid_level": "raid1", 00:21:54.175 "superblock": true, 00:21:54.175 "num_base_bdevs": 4, 00:21:54.175 "num_base_bdevs_discovered": 3, 00:21:54.175 "num_base_bdevs_operational": 3, 00:21:54.175 "base_bdevs_list": [ 00:21:54.175 { 00:21:54.175 "name": "spare", 00:21:54.175 "uuid": "30122453-2fa4-5adf-9415-b334498654c6", 00:21:54.175 "is_configured": true, 00:21:54.175 "data_offset": 2048, 00:21:54.175 "data_size": 63488 00:21:54.175 }, 00:21:54.175 { 00:21:54.175 "name": null, 00:21:54.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.175 "is_configured": false, 
00:21:54.175 "data_offset": 2048, 00:21:54.176 "data_size": 63488 00:21:54.176 }, 00:21:54.176 { 00:21:54.176 "name": "BaseBdev3", 00:21:54.176 "uuid": "f9dbe555-db60-5dfe-9995-260ca34deccc", 00:21:54.176 "is_configured": true, 00:21:54.176 "data_offset": 2048, 00:21:54.176 "data_size": 63488 00:21:54.176 }, 00:21:54.176 { 00:21:54.176 "name": "BaseBdev4", 00:21:54.176 "uuid": "330298ea-90dd-502d-af41-331c79693aff", 00:21:54.176 "is_configured": true, 00:21:54.176 "data_offset": 2048, 00:21:54.176 "data_size": 63488 00:21:54.176 } 00:21:54.176 ] 00:21:54.176 }' 00:21:54.176 05:19:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:54.176 05:19:13 -- common/autotest_common.sh@10 -- # set +x 00:21:54.452 05:19:13 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:54.452 05:19:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:54.452 05:19:13 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:54.452 05:19:13 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:54.452 05:19:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:54.452 05:19:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.452 05:19:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.716 05:19:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:54.716 "name": "raid_bdev1", 00:21:54.716 "uuid": "d305f017-7eda-4bb9-a5cc-992a4e492b64", 00:21:54.716 "strip_size_kb": 0, 00:21:54.716 "state": "online", 00:21:54.716 "raid_level": "raid1", 00:21:54.716 "superblock": true, 00:21:54.716 "num_base_bdevs": 4, 00:21:54.716 "num_base_bdevs_discovered": 3, 00:21:54.716 "num_base_bdevs_operational": 3, 00:21:54.716 "base_bdevs_list": [ 00:21:54.716 { 00:21:54.716 "name": "spare", 00:21:54.716 "uuid": "30122453-2fa4-5adf-9415-b334498654c6", 00:21:54.716 "is_configured": true, 00:21:54.716 "data_offset": 2048, 00:21:54.716 "data_size": 63488 00:21:54.716 }, 00:21:54.716 { 00:21:54.716 "name": null, 00:21:54.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.716 "is_configured": false, 00:21:54.716 "data_offset": 2048, 00:21:54.716 "data_size": 63488 00:21:54.716 }, 00:21:54.716 { 00:21:54.716 "name": "BaseBdev3", 00:21:54.716 "uuid": "f9dbe555-db60-5dfe-9995-260ca34deccc", 00:21:54.716 "is_configured": true, 00:21:54.716 "data_offset": 2048, 00:21:54.716 "data_size": 63488 00:21:54.716 }, 00:21:54.716 { 00:21:54.716 "name": "BaseBdev4", 00:21:54.716 "uuid": "330298ea-90dd-502d-af41-331c79693aff", 00:21:54.716 "is_configured": true, 00:21:54.716 "data_offset": 2048, 00:21:54.716 "data_size": 63488 00:21:54.716 } 00:21:54.716 ] 00:21:54.716 }' 00:21:54.716 05:19:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:54.716 05:19:13 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:54.716 05:19:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:54.716 05:19:13 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:54.716 05:19:13 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.716 05:19:13 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:54.976 05:19:14 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:21:54.976 05:19:14 -- bdev/bdev_raid.sh@709 -- # killprocess 80631 00:21:54.976 05:19:14 -- common/autotest_common.sh@926 -- # '[' -z 80631 ']' 00:21:54.976 05:19:14 -- 
common/autotest_common.sh@930 -- # kill -0 80631 00:21:54.976 05:19:14 -- common/autotest_common.sh@931 -- # uname 00:21:54.976 05:19:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:54.976 05:19:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 80631 00:21:54.976 killing process with pid 80631 00:21:54.976 Received shutdown signal, test time was about 60.000000 seconds 00:21:54.976 00:21:54.976 Latency(us) 00:21:54.976 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.976 =================================================================================================================== 00:21:54.976 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:54.976 05:19:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:54.976 05:19:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:54.976 05:19:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 80631' 00:21:54.976 05:19:14 -- common/autotest_common.sh@945 -- # kill 80631 00:21:54.976 [2024-07-26 05:19:14.057253] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:54.976 05:19:14 -- common/autotest_common.sh@950 -- # wait 80631 00:21:54.976 [2024-07-26 05:19:14.057359] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:54.976 [2024-07-26 05:19:14.057469] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:54.976 [2024-07-26 05:19:14.057486] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000c080 name raid_bdev1, state offline 00:21:55.543 [2024-07-26 05:19:14.384306] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:56.481 05:19:15 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:56.481 00:21:56.481 real 0m24.626s 00:21:56.481 user 0m33.340s 00:21:56.481 sys 0m4.301s 00:21:56.481 05:19:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:56.481 05:19:15 -- common/autotest_common.sh@10 -- # set +x 00:21:56.481 ************************************ 00:21:56.481 END TEST raid_rebuild_test_sb 00:21:56.481 ************************************ 00:21:56.481 05:19:15 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true 00:21:56.481 05:19:15 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:56.481 05:19:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:56.481 05:19:15 -- common/autotest_common.sh@10 -- # set +x 00:21:56.481 ************************************ 00:21:56.481 START TEST raid_rebuild_test_io 00:21:56.481 ************************************ 00:21:56.481 05:19:15 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false true 00:21:56.481 05:19:15 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:56.481 05:19:15 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:21:56.481 05:19:15 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:21:56.481 05:19:15 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:21:56.481 05:19:15 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:56.481 05:19:15 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:56.481 05:19:15 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:21:56.481 05:19:15 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:56.481 05:19:15 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:56.481 05:19:15 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:21:56.481 05:19:15 -- 
bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:56.481 05:19:15 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:56.481 05:19:15 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:21:56.481 05:19:15 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:56.481 05:19:15 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:56.481 05:19:15 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:21:56.481 05:19:15 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:56.481 05:19:15 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:56.481 05:19:15 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:56.481 05:19:15 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:56.481 05:19:15 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:56.481 05:19:15 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:56.481 05:19:15 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:56.481 05:19:15 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:56.481 05:19:15 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:56.481 05:19:15 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:56.481 05:19:15 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:56.481 05:19:15 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:21:56.481 05:19:15 -- bdev/bdev_raid.sh@544 -- # raid_pid=81217 00:21:56.481 05:19:15 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:56.481 05:19:15 -- bdev/bdev_raid.sh@545 -- # waitforlisten 81217 /var/tmp/spdk-raid.sock 00:21:56.481 05:19:15 -- common/autotest_common.sh@819 -- # '[' -z 81217 ']' 00:21:56.481 05:19:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:56.481 05:19:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:56.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:56.481 05:19:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:56.481 05:19:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:56.481 05:19:15 -- common/autotest_common.sh@10 -- # set +x 00:21:56.481 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:56.481 Zero copy mechanism will not be used. 00:21:56.481 [2024-07-26 05:19:15.414993] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
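For readers following the xtrace above: the test launches bdevperf with -z so it idles until it is configured over the RPC socket, then blocks in waitforlisten until that socket answers. Below is a minimal sketch of that launch-and-wait pattern. The socket path, binary location, and bdevperf flags are copied from the log; the polling loop is an illustrative stand-in for the fuller waitforlisten helper in autotest_common.sh, and rpc_get_methods is used here only as a cheap liveness probe, not as the exact call the helper makes.

#!/usr/bin/env bash
# Sketch: start bdevperf as an RPC-driven target and wait for its UNIX socket.
rpc_sock=/var/tmp/spdk-raid.sock
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $rpc_sock"

# -z: start idle and wait for RPC configuration; -L bdev_raid enables the raid debug log flag.
$bdevperf -r "$rpc_sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!

# Poll until the application answers on the socket (stand-in for waitforlisten).
for ((i = 0; i < 100; i++)); do
    if $rpc_py rpc_get_methods > /dev/null 2>&1; then
        break
    fi
    sleep 0.1
done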
00:21:56.481 [2024-07-26 05:19:15.415182] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81217 ] 00:21:56.481 [2024-07-26 05:19:15.565633] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.741 [2024-07-26 05:19:15.718905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.000 [2024-07-26 05:19:15.864331] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:57.259 05:19:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:57.259 05:19:16 -- common/autotest_common.sh@852 -- # return 0 00:21:57.259 05:19:16 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:57.259 05:19:16 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:57.259 05:19:16 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:57.518 BaseBdev1 00:21:57.518 05:19:16 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:57.518 05:19:16 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:57.518 05:19:16 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:57.777 BaseBdev2 00:21:57.777 05:19:16 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:57.777 05:19:16 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:57.777 05:19:16 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:58.036 BaseBdev3 00:21:58.036 05:19:17 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:58.036 05:19:17 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:58.036 05:19:17 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:58.295 BaseBdev4 00:21:58.295 05:19:17 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:58.554 spare_malloc 00:21:58.554 05:19:17 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:58.554 spare_delay 00:21:58.554 05:19:17 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:58.813 [2024-07-26 05:19:17.825415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:58.813 [2024-07-26 05:19:17.825499] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:58.813 [2024-07-26 05:19:17.825529] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:21:58.813 [2024-07-26 05:19:17.825544] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:58.813 [2024-07-26 05:19:17.828166] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:58.813 [2024-07-26 05:19:17.828360] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:58.813 spare 00:21:58.813 05:19:17 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:59.072 [2024-07-26 05:19:18.005525] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:59.072 [2024-07-26 05:19:18.007502] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:59.072 [2024-07-26 05:19:18.007556] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:59.072 [2024-07-26 05:19:18.007604] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:59.072 [2024-07-26 05:19:18.007674] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:21:59.072 [2024-07-26 05:19:18.007691] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:59.072 [2024-07-26 05:19:18.007808] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:21:59.072 [2024-07-26 05:19:18.008176] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:21:59.072 [2024-07-26 05:19:18.008192] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:21:59.072 [2024-07-26 05:19:18.008349] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:59.072 05:19:18 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:59.072 05:19:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:59.072 05:19:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:59.072 05:19:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:59.072 05:19:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:59.072 05:19:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:59.072 05:19:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:59.072 05:19:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:59.072 05:19:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:59.072 05:19:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:59.072 05:19:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.072 05:19:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.331 05:19:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:59.331 "name": "raid_bdev1", 00:21:59.331 "uuid": "3d7c01e6-fc4b-47d1-a22b-f524eb5afb6e", 00:21:59.331 "strip_size_kb": 0, 00:21:59.331 "state": "online", 00:21:59.331 "raid_level": "raid1", 00:21:59.331 "superblock": false, 00:21:59.331 "num_base_bdevs": 4, 00:21:59.331 "num_base_bdevs_discovered": 4, 00:21:59.331 "num_base_bdevs_operational": 4, 00:21:59.331 "base_bdevs_list": [ 00:21:59.331 { 00:21:59.331 "name": "BaseBdev1", 00:21:59.331 "uuid": "efb221ed-3d81-474b-a27f-b74869787197", 00:21:59.331 "is_configured": true, 00:21:59.331 "data_offset": 0, 00:21:59.331 "data_size": 65536 00:21:59.331 }, 00:21:59.331 { 00:21:59.331 "name": "BaseBdev2", 00:21:59.331 "uuid": "2d6c2544-f3c0-4ecc-9653-789c0ce510c8", 00:21:59.331 "is_configured": true, 00:21:59.331 "data_offset": 0, 00:21:59.331 "data_size": 65536 00:21:59.331 }, 00:21:59.331 { 00:21:59.331 "name": "BaseBdev3", 00:21:59.331 "uuid": "90bd1adc-07a5-421f-a225-74faafc04267", 00:21:59.331 "is_configured": true, 00:21:59.331 "data_offset": 0, 00:21:59.331 "data_size": 65536 00:21:59.331 }, 
00:21:59.331 { 00:21:59.331 "name": "BaseBdev4", 00:21:59.331 "uuid": "085d6020-59fb-454e-aadc-e430dcee6bd6", 00:21:59.331 "is_configured": true, 00:21:59.331 "data_offset": 0, 00:21:59.331 "data_size": 65536 00:21:59.331 } 00:21:59.331 ] 00:21:59.331 }' 00:21:59.331 05:19:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:59.331 05:19:18 -- common/autotest_common.sh@10 -- # set +x 00:21:59.590 05:19:18 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:59.590 05:19:18 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:59.848 [2024-07-26 05:19:18.737908] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:59.848 05:19:18 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:21:59.848 05:19:18 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.848 05:19:18 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:59.848 05:19:18 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:21:59.848 05:19:18 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:21:59.848 05:19:18 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:59.848 05:19:18 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:00.107 [2024-07-26 05:19:19.079873] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:22:00.107 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:00.107 Zero copy mechanism will not be used. 00:22:00.107 Running I/O for 60 seconds... 
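The verify_raid_bdev_state calls traced above all follow one pattern: dump every raid bdev with bdev_raid_get_bdevs all, pick the entry by name with jq, and compare its fields against the expected values. Below is a condensed sketch of that check, using only fields that appear in the JSON dumps in this log (state, raid_level, num_base_bdevs_discovered); the real helper traced from bdev_raid.sh also walks the base_bdevs_list entries, which this sketch omits.

# Sketch: assert that raid_bdev1 is online, raid1, with 3 discovered base bdevs.
rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

check_raid_state() {
    local name=$1 expected_state=$2 expected_level=$3 expected_discovered=$4
    local info
    # Select the named raid bdev from the full listing.
    info=$($rpc_py bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
    [[ $(jq -r '.state' <<< "$info") == "$expected_state" ]] || return 1
    [[ $(jq -r '.raid_level' <<< "$info") == "$expected_level" ]] || return 1
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") -eq $expected_discovered ]] || return 1
}

check_raid_state raid_bdev1 online raid1 3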
00:22:00.107 [2024-07-26 05:19:19.124492] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:00.107 [2024-07-26 05:19:19.136939] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005930 00:22:00.107 05:19:19 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:00.107 05:19:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:00.107 05:19:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:00.107 05:19:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:00.107 05:19:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:00.107 05:19:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:00.107 05:19:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:00.107 05:19:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:00.107 05:19:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:00.107 05:19:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:00.107 05:19:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.107 05:19:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:00.366 05:19:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:00.366 "name": "raid_bdev1", 00:22:00.366 "uuid": "3d7c01e6-fc4b-47d1-a22b-f524eb5afb6e", 00:22:00.366 "strip_size_kb": 0, 00:22:00.366 "state": "online", 00:22:00.366 "raid_level": "raid1", 00:22:00.366 "superblock": false, 00:22:00.366 "num_base_bdevs": 4, 00:22:00.366 "num_base_bdevs_discovered": 3, 00:22:00.366 "num_base_bdevs_operational": 3, 00:22:00.366 "base_bdevs_list": [ 00:22:00.366 { 00:22:00.366 "name": null, 00:22:00.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.366 "is_configured": false, 00:22:00.366 "data_offset": 0, 00:22:00.366 "data_size": 65536 00:22:00.366 }, 00:22:00.366 { 00:22:00.366 "name": "BaseBdev2", 00:22:00.366 "uuid": "2d6c2544-f3c0-4ecc-9653-789c0ce510c8", 00:22:00.366 "is_configured": true, 00:22:00.366 "data_offset": 0, 00:22:00.366 "data_size": 65536 00:22:00.366 }, 00:22:00.366 { 00:22:00.366 "name": "BaseBdev3", 00:22:00.366 "uuid": "90bd1adc-07a5-421f-a225-74faafc04267", 00:22:00.366 "is_configured": true, 00:22:00.366 "data_offset": 0, 00:22:00.366 "data_size": 65536 00:22:00.366 }, 00:22:00.366 { 00:22:00.366 "name": "BaseBdev4", 00:22:00.366 "uuid": "085d6020-59fb-454e-aadc-e430dcee6bd6", 00:22:00.366 "is_configured": true, 00:22:00.366 "data_offset": 0, 00:22:00.366 "data_size": 65536 00:22:00.366 } 00:22:00.366 ] 00:22:00.366 }' 00:22:00.366 05:19:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:00.366 05:19:19 -- common/autotest_common.sh@10 -- # set +x 00:22:00.625 05:19:19 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:00.884 [2024-07-26 05:19:19.870555] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:00.884 [2024-07-26 05:19:19.870607] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:00.884 05:19:19 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:00.884 [2024-07-26 05:19:19.912837] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:22:00.884 [2024-07-26 05:19:19.914786] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:01.142 [2024-07-26 
05:19:20.030505] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:01.142 [2024-07-26 05:19:20.031079] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:01.142 [2024-07-26 05:19:20.251999] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:01.142 [2024-07-26 05:19:20.252293] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:01.401 [2024-07-26 05:19:20.495286] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:01.401 [2024-07-26 05:19:20.495663] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:01.660 [2024-07-26 05:19:20.633976] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:01.920 05:19:20 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:01.920 05:19:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:01.920 05:19:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:01.920 05:19:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:01.920 05:19:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:01.920 05:19:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.920 05:19:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.920 [2024-07-26 05:19:20.990118] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:02.179 05:19:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:02.179 "name": "raid_bdev1", 00:22:02.179 "uuid": "3d7c01e6-fc4b-47d1-a22b-f524eb5afb6e", 00:22:02.179 "strip_size_kb": 0, 00:22:02.179 "state": "online", 00:22:02.179 "raid_level": "raid1", 00:22:02.179 "superblock": false, 00:22:02.179 "num_base_bdevs": 4, 00:22:02.179 "num_base_bdevs_discovered": 4, 00:22:02.179 "num_base_bdevs_operational": 4, 00:22:02.179 "process": { 00:22:02.179 "type": "rebuild", 00:22:02.179 "target": "spare", 00:22:02.179 "progress": { 00:22:02.179 "blocks": 18432, 00:22:02.179 "percent": 28 00:22:02.179 } 00:22:02.179 }, 00:22:02.179 "base_bdevs_list": [ 00:22:02.179 { 00:22:02.179 "name": "spare", 00:22:02.179 "uuid": "a56ca438-9ea1-5fa1-bdb5-292a864bd54a", 00:22:02.179 "is_configured": true, 00:22:02.179 "data_offset": 0, 00:22:02.179 "data_size": 65536 00:22:02.179 }, 00:22:02.179 { 00:22:02.179 "name": "BaseBdev2", 00:22:02.179 "uuid": "2d6c2544-f3c0-4ecc-9653-789c0ce510c8", 00:22:02.179 "is_configured": true, 00:22:02.179 "data_offset": 0, 00:22:02.179 "data_size": 65536 00:22:02.179 }, 00:22:02.179 { 00:22:02.179 "name": "BaseBdev3", 00:22:02.179 "uuid": "90bd1adc-07a5-421f-a225-74faafc04267", 00:22:02.179 "is_configured": true, 00:22:02.179 "data_offset": 0, 00:22:02.179 "data_size": 65536 00:22:02.179 }, 00:22:02.179 { 00:22:02.179 "name": "BaseBdev4", 00:22:02.179 "uuid": "085d6020-59fb-454e-aadc-e430dcee6bd6", 00:22:02.179 "is_configured": true, 00:22:02.179 "data_offset": 0, 00:22:02.179 "data_size": 65536 00:22:02.179 } 00:22:02.179 ] 00:22:02.179 }' 00:22:02.179 05:19:21 -- bdev/bdev_raid.sh@190 -- # 
jq -r '.process.type // "none"' 00:22:02.179 05:19:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:02.179 05:19:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:02.179 05:19:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:02.179 05:19:21 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:02.439 [2024-07-26 05:19:21.337034] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:02.439 [2024-07-26 05:19:21.547959] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:02.698 [2024-07-26 05:19:21.566177] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:02.698 [2024-07-26 05:19:21.598727] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005930 00:22:02.698 05:19:21 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:02.698 05:19:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:02.698 05:19:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:02.698 05:19:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:02.698 05:19:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:02.698 05:19:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:02.698 05:19:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:02.698 05:19:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:02.698 05:19:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:02.698 05:19:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:02.698 05:19:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.698 05:19:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.957 05:19:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:02.957 "name": "raid_bdev1", 00:22:02.957 "uuid": "3d7c01e6-fc4b-47d1-a22b-f524eb5afb6e", 00:22:02.957 "strip_size_kb": 0, 00:22:02.957 "state": "online", 00:22:02.957 "raid_level": "raid1", 00:22:02.957 "superblock": false, 00:22:02.957 "num_base_bdevs": 4, 00:22:02.957 "num_base_bdevs_discovered": 3, 00:22:02.957 "num_base_bdevs_operational": 3, 00:22:02.957 "base_bdevs_list": [ 00:22:02.957 { 00:22:02.957 "name": null, 00:22:02.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.957 "is_configured": false, 00:22:02.957 "data_offset": 0, 00:22:02.957 "data_size": 65536 00:22:02.957 }, 00:22:02.957 { 00:22:02.957 "name": "BaseBdev2", 00:22:02.957 "uuid": "2d6c2544-f3c0-4ecc-9653-789c0ce510c8", 00:22:02.957 "is_configured": true, 00:22:02.957 "data_offset": 0, 00:22:02.957 "data_size": 65536 00:22:02.957 }, 00:22:02.957 { 00:22:02.957 "name": "BaseBdev3", 00:22:02.957 "uuid": "90bd1adc-07a5-421f-a225-74faafc04267", 00:22:02.957 "is_configured": true, 00:22:02.957 "data_offset": 0, 00:22:02.957 "data_size": 65536 00:22:02.957 }, 00:22:02.957 { 00:22:02.957 "name": "BaseBdev4", 00:22:02.957 "uuid": "085d6020-59fb-454e-aadc-e430dcee6bd6", 00:22:02.957 "is_configured": true, 00:22:02.957 "data_offset": 0, 00:22:02.957 "data_size": 65536 00:22:02.957 } 00:22:02.957 ] 00:22:02.957 }' 00:22:02.957 05:19:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:02.957 05:19:21 -- common/autotest_common.sh@10 -- # set +x 00:22:03.216 05:19:22 -- bdev/bdev_raid.sh@610 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:22:03.216 05:19:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:03.216 05:19:22 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:03.216 05:19:22 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:03.216 05:19:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:03.216 05:19:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.216 05:19:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.474 05:19:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:03.474 "name": "raid_bdev1", 00:22:03.474 "uuid": "3d7c01e6-fc4b-47d1-a22b-f524eb5afb6e", 00:22:03.474 "strip_size_kb": 0, 00:22:03.474 "state": "online", 00:22:03.474 "raid_level": "raid1", 00:22:03.474 "superblock": false, 00:22:03.474 "num_base_bdevs": 4, 00:22:03.474 "num_base_bdevs_discovered": 3, 00:22:03.474 "num_base_bdevs_operational": 3, 00:22:03.474 "base_bdevs_list": [ 00:22:03.474 { 00:22:03.474 "name": null, 00:22:03.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.474 "is_configured": false, 00:22:03.474 "data_offset": 0, 00:22:03.474 "data_size": 65536 00:22:03.474 }, 00:22:03.474 { 00:22:03.474 "name": "BaseBdev2", 00:22:03.474 "uuid": "2d6c2544-f3c0-4ecc-9653-789c0ce510c8", 00:22:03.474 "is_configured": true, 00:22:03.474 "data_offset": 0, 00:22:03.474 "data_size": 65536 00:22:03.474 }, 00:22:03.474 { 00:22:03.474 "name": "BaseBdev3", 00:22:03.474 "uuid": "90bd1adc-07a5-421f-a225-74faafc04267", 00:22:03.474 "is_configured": true, 00:22:03.474 "data_offset": 0, 00:22:03.474 "data_size": 65536 00:22:03.474 }, 00:22:03.474 { 00:22:03.474 "name": "BaseBdev4", 00:22:03.474 "uuid": "085d6020-59fb-454e-aadc-e430dcee6bd6", 00:22:03.474 "is_configured": true, 00:22:03.474 "data_offset": 0, 00:22:03.474 "data_size": 65536 00:22:03.474 } 00:22:03.474 ] 00:22:03.474 }' 00:22:03.474 05:19:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:03.474 05:19:22 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:03.474 05:19:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:03.474 05:19:22 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:03.474 05:19:22 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:03.733 [2024-07-26 05:19:22.693411] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:03.733 [2024-07-26 05:19:22.693458] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:03.733 05:19:22 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:03.733 [2024-07-26 05:19:22.750490] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:22:03.733 [2024-07-26 05:19:22.752815] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:03.993 [2024-07-26 05:19:22.888195] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:04.252 [2024-07-26 05:19:23.112592] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:04.252 [2024-07-26 05:19:23.113549] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:04.510 [2024-07-26 05:19:23.455313] 
bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:04.768 [2024-07-26 05:19:23.674521] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:04.768 05:19:23 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:04.768 05:19:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:04.768 05:19:23 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:04.768 05:19:23 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:04.768 05:19:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:04.768 05:19:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.768 05:19:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.027 [2024-07-26 05:19:23.917267] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:05.027 05:19:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:05.027 "name": "raid_bdev1", 00:22:05.027 "uuid": "3d7c01e6-fc4b-47d1-a22b-f524eb5afb6e", 00:22:05.027 "strip_size_kb": 0, 00:22:05.027 "state": "online", 00:22:05.027 "raid_level": "raid1", 00:22:05.027 "superblock": false, 00:22:05.027 "num_base_bdevs": 4, 00:22:05.027 "num_base_bdevs_discovered": 4, 00:22:05.027 "num_base_bdevs_operational": 4, 00:22:05.027 "process": { 00:22:05.027 "type": "rebuild", 00:22:05.027 "target": "spare", 00:22:05.027 "progress": { 00:22:05.027 "blocks": 14336, 00:22:05.027 "percent": 21 00:22:05.027 } 00:22:05.027 }, 00:22:05.027 "base_bdevs_list": [ 00:22:05.027 { 00:22:05.027 "name": "spare", 00:22:05.027 "uuid": "a56ca438-9ea1-5fa1-bdb5-292a864bd54a", 00:22:05.027 "is_configured": true, 00:22:05.027 "data_offset": 0, 00:22:05.027 "data_size": 65536 00:22:05.027 }, 00:22:05.027 { 00:22:05.027 "name": "BaseBdev2", 00:22:05.027 "uuid": "2d6c2544-f3c0-4ecc-9653-789c0ce510c8", 00:22:05.027 "is_configured": true, 00:22:05.027 "data_offset": 0, 00:22:05.027 "data_size": 65536 00:22:05.027 }, 00:22:05.027 { 00:22:05.027 "name": "BaseBdev3", 00:22:05.027 "uuid": "90bd1adc-07a5-421f-a225-74faafc04267", 00:22:05.027 "is_configured": true, 00:22:05.027 "data_offset": 0, 00:22:05.027 "data_size": 65536 00:22:05.027 }, 00:22:05.027 { 00:22:05.027 "name": "BaseBdev4", 00:22:05.027 "uuid": "085d6020-59fb-454e-aadc-e430dcee6bd6", 00:22:05.027 "is_configured": true, 00:22:05.027 "data_offset": 0, 00:22:05.027 "data_size": 65536 00:22:05.027 } 00:22:05.027 ] 00:22:05.027 }' 00:22:05.027 05:19:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:05.027 05:19:23 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:05.027 05:19:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:05.027 05:19:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:05.027 05:19:24 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:22:05.027 05:19:24 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:05.027 05:19:24 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:05.027 05:19:24 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:05.027 05:19:24 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:05.027 [2024-07-26 05:19:24.026829] bdev_raid.c: 
723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:05.286 [2024-07-26 05:19:24.225442] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:05.286 [2024-07-26 05:19:24.323173] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000005930 00:22:05.286 [2024-07-26 05:19:24.323204] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000005ad0 00:22:05.286 [2024-07-26 05:19:24.332334] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:05.286 05:19:24 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:05.286 05:19:24 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:05.286 05:19:24 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:05.286 05:19:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:05.286 05:19:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:05.286 05:19:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:05.286 05:19:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:05.286 05:19:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.286 05:19:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.582 [2024-07-26 05:19:24.439995] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:05.582 [2024-07-26 05:19:24.440593] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:05.582 05:19:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:05.582 "name": "raid_bdev1", 00:22:05.582 "uuid": "3d7c01e6-fc4b-47d1-a22b-f524eb5afb6e", 00:22:05.582 "strip_size_kb": 0, 00:22:05.582 "state": "online", 00:22:05.582 "raid_level": "raid1", 00:22:05.582 "superblock": false, 00:22:05.582 "num_base_bdevs": 4, 00:22:05.582 "num_base_bdevs_discovered": 3, 00:22:05.582 "num_base_bdevs_operational": 3, 00:22:05.582 "process": { 00:22:05.582 "type": "rebuild", 00:22:05.582 "target": "spare", 00:22:05.582 "progress": { 00:22:05.582 "blocks": 22528, 00:22:05.582 "percent": 34 00:22:05.582 } 00:22:05.582 }, 00:22:05.582 "base_bdevs_list": [ 00:22:05.582 { 00:22:05.582 "name": "spare", 00:22:05.582 "uuid": "a56ca438-9ea1-5fa1-bdb5-292a864bd54a", 00:22:05.582 "is_configured": true, 00:22:05.582 "data_offset": 0, 00:22:05.582 "data_size": 65536 00:22:05.582 }, 00:22:05.582 { 00:22:05.582 "name": null, 00:22:05.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.582 "is_configured": false, 00:22:05.582 "data_offset": 0, 00:22:05.582 "data_size": 65536 00:22:05.582 }, 00:22:05.582 { 00:22:05.582 "name": "BaseBdev3", 00:22:05.582 "uuid": "90bd1adc-07a5-421f-a225-74faafc04267", 00:22:05.582 "is_configured": true, 00:22:05.582 "data_offset": 0, 00:22:05.582 "data_size": 65536 00:22:05.582 }, 00:22:05.582 { 00:22:05.582 "name": "BaseBdev4", 00:22:05.582 "uuid": "085d6020-59fb-454e-aadc-e430dcee6bd6", 00:22:05.582 "is_configured": true, 00:22:05.582 "data_offset": 0, 00:22:05.582 "data_size": 65536 00:22:05.582 } 00:22:05.582 ] 00:22:05.582 }' 00:22:05.582 05:19:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:05.582 05:19:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:05.582 
05:19:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:05.582 05:19:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:05.582 05:19:24 -- bdev/bdev_raid.sh@657 -- # local timeout=471 00:22:05.582 05:19:24 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:05.582 05:19:24 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:05.582 05:19:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:05.582 05:19:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:05.582 05:19:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:05.582 05:19:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:05.582 05:19:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.582 05:19:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.841 05:19:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:05.841 "name": "raid_bdev1", 00:22:05.841 "uuid": "3d7c01e6-fc4b-47d1-a22b-f524eb5afb6e", 00:22:05.841 "strip_size_kb": 0, 00:22:05.841 "state": "online", 00:22:05.841 "raid_level": "raid1", 00:22:05.841 "superblock": false, 00:22:05.841 "num_base_bdevs": 4, 00:22:05.841 "num_base_bdevs_discovered": 3, 00:22:05.841 "num_base_bdevs_operational": 3, 00:22:05.841 "process": { 00:22:05.841 "type": "rebuild", 00:22:05.841 "target": "spare", 00:22:05.841 "progress": { 00:22:05.841 "blocks": 24576, 00:22:05.841 "percent": 37 00:22:05.841 } 00:22:05.841 }, 00:22:05.841 "base_bdevs_list": [ 00:22:05.841 { 00:22:05.841 "name": "spare", 00:22:05.841 "uuid": "a56ca438-9ea1-5fa1-bdb5-292a864bd54a", 00:22:05.841 "is_configured": true, 00:22:05.841 "data_offset": 0, 00:22:05.841 "data_size": 65536 00:22:05.841 }, 00:22:05.841 { 00:22:05.841 "name": null, 00:22:05.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.841 "is_configured": false, 00:22:05.841 "data_offset": 0, 00:22:05.841 "data_size": 65536 00:22:05.841 }, 00:22:05.841 { 00:22:05.841 "name": "BaseBdev3", 00:22:05.841 "uuid": "90bd1adc-07a5-421f-a225-74faafc04267", 00:22:05.841 "is_configured": true, 00:22:05.841 "data_offset": 0, 00:22:05.841 "data_size": 65536 00:22:05.841 }, 00:22:05.842 { 00:22:05.842 "name": "BaseBdev4", 00:22:05.842 "uuid": "085d6020-59fb-454e-aadc-e430dcee6bd6", 00:22:05.842 "is_configured": true, 00:22:05.842 "data_offset": 0, 00:22:05.842 "data_size": 65536 00:22:05.842 } 00:22:05.842 ] 00:22:05.842 }' 00:22:05.842 05:19:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:05.842 05:19:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:05.842 05:19:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:05.842 05:19:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:05.842 05:19:24 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:05.842 [2024-07-26 05:19:24.769133] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:22:05.842 [2024-07-26 05:19:24.769549] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:22:05.842 [2024-07-26 05:19:24.877134] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:22:05.842 [2024-07-26 05:19:24.877273] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 
offset_begin: 24576 offset_end: 30720 00:22:06.101 [2024-07-26 05:19:25.194154] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:22:06.360 [2024-07-26 05:19:25.310141] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:22:06.618 [2024-07-26 05:19:25.626842] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:22:06.877 05:19:25 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:06.877 05:19:25 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:06.877 05:19:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:06.877 05:19:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:06.877 05:19:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:06.877 05:19:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:06.877 05:19:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.877 05:19:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.877 05:19:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:06.877 "name": "raid_bdev1", 00:22:06.877 "uuid": "3d7c01e6-fc4b-47d1-a22b-f524eb5afb6e", 00:22:06.877 "strip_size_kb": 0, 00:22:06.877 "state": "online", 00:22:06.877 "raid_level": "raid1", 00:22:06.877 "superblock": false, 00:22:06.877 "num_base_bdevs": 4, 00:22:06.877 "num_base_bdevs_discovered": 3, 00:22:06.877 "num_base_bdevs_operational": 3, 00:22:06.877 "process": { 00:22:06.877 "type": "rebuild", 00:22:06.877 "target": "spare", 00:22:06.877 "progress": { 00:22:06.877 "blocks": 40960, 00:22:06.877 "percent": 62 00:22:06.877 } 00:22:06.877 }, 00:22:06.877 "base_bdevs_list": [ 00:22:06.877 { 00:22:06.877 "name": "spare", 00:22:06.877 "uuid": "a56ca438-9ea1-5fa1-bdb5-292a864bd54a", 00:22:06.877 "is_configured": true, 00:22:06.878 "data_offset": 0, 00:22:06.878 "data_size": 65536 00:22:06.878 }, 00:22:06.878 { 00:22:06.878 "name": null, 00:22:06.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.878 "is_configured": false, 00:22:06.878 "data_offset": 0, 00:22:06.878 "data_size": 65536 00:22:06.878 }, 00:22:06.878 { 00:22:06.878 "name": "BaseBdev3", 00:22:06.878 "uuid": "90bd1adc-07a5-421f-a225-74faafc04267", 00:22:06.878 "is_configured": true, 00:22:06.878 "data_offset": 0, 00:22:06.878 "data_size": 65536 00:22:06.878 }, 00:22:06.878 { 00:22:06.878 "name": "BaseBdev4", 00:22:06.878 "uuid": "085d6020-59fb-454e-aadc-e430dcee6bd6", 00:22:06.878 "is_configured": true, 00:22:06.878 "data_offset": 0, 00:22:06.878 "data_size": 65536 00:22:06.878 } 00:22:06.878 ] 00:22:06.878 }' 00:22:06.878 05:19:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:07.136 05:19:25 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:07.136 05:19:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:07.136 05:19:25 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:07.137 05:19:25 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:07.137 [2024-07-26 05:19:26.211644] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:22:07.137 [2024-07-26 05:19:26.211857] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 
offset_end: 49152 00:22:08.072 [2024-07-26 05:19:26.855637] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:22:08.072 [2024-07-26 05:19:26.856023] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:22:08.072 05:19:27 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:08.072 05:19:27 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:08.072 05:19:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:08.072 05:19:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:08.072 05:19:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:08.072 05:19:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:08.072 05:19:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.072 05:19:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.072 [2024-07-26 05:19:27.071652] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:22:08.332 05:19:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:08.332 "name": "raid_bdev1", 00:22:08.332 "uuid": "3d7c01e6-fc4b-47d1-a22b-f524eb5afb6e", 00:22:08.332 "strip_size_kb": 0, 00:22:08.332 "state": "online", 00:22:08.332 "raid_level": "raid1", 00:22:08.332 "superblock": false, 00:22:08.332 "num_base_bdevs": 4, 00:22:08.332 "num_base_bdevs_discovered": 3, 00:22:08.332 "num_base_bdevs_operational": 3, 00:22:08.332 "process": { 00:22:08.332 "type": "rebuild", 00:22:08.332 "target": "spare", 00:22:08.332 "progress": { 00:22:08.332 "blocks": 59392, 00:22:08.332 "percent": 90 00:22:08.332 } 00:22:08.332 }, 00:22:08.332 "base_bdevs_list": [ 00:22:08.332 { 00:22:08.332 "name": "spare", 00:22:08.332 "uuid": "a56ca438-9ea1-5fa1-bdb5-292a864bd54a", 00:22:08.332 "is_configured": true, 00:22:08.332 "data_offset": 0, 00:22:08.332 "data_size": 65536 00:22:08.332 }, 00:22:08.332 { 00:22:08.332 "name": null, 00:22:08.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.332 "is_configured": false, 00:22:08.332 "data_offset": 0, 00:22:08.332 "data_size": 65536 00:22:08.332 }, 00:22:08.332 { 00:22:08.332 "name": "BaseBdev3", 00:22:08.332 "uuid": "90bd1adc-07a5-421f-a225-74faafc04267", 00:22:08.332 "is_configured": true, 00:22:08.332 "data_offset": 0, 00:22:08.332 "data_size": 65536 00:22:08.332 }, 00:22:08.332 { 00:22:08.332 "name": "BaseBdev4", 00:22:08.332 "uuid": "085d6020-59fb-454e-aadc-e430dcee6bd6", 00:22:08.332 "is_configured": true, 00:22:08.332 "data_offset": 0, 00:22:08.332 "data_size": 65536 00:22:08.332 } 00:22:08.332 ] 00:22:08.332 }' 00:22:08.332 05:19:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:08.332 05:19:27 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:08.332 05:19:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:08.332 05:19:27 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:08.332 05:19:27 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:08.591 [2024-07-26 05:19:27.512350] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:08.591 [2024-07-26 05:19:27.618148] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:08.591 [2024-07-26 05:19:27.620936] bdev_raid.c: 
316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:09.211 05:19:28 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:09.211 05:19:28 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:09.211 05:19:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:09.211 05:19:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:09.211 05:19:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:09.211 05:19:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:09.211 05:19:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.211 05:19:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.470 05:19:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:09.470 "name": "raid_bdev1", 00:22:09.470 "uuid": "3d7c01e6-fc4b-47d1-a22b-f524eb5afb6e", 00:22:09.470 "strip_size_kb": 0, 00:22:09.470 "state": "online", 00:22:09.470 "raid_level": "raid1", 00:22:09.470 "superblock": false, 00:22:09.470 "num_base_bdevs": 4, 00:22:09.470 "num_base_bdevs_discovered": 3, 00:22:09.470 "num_base_bdevs_operational": 3, 00:22:09.470 "base_bdevs_list": [ 00:22:09.470 { 00:22:09.470 "name": "spare", 00:22:09.470 "uuid": "a56ca438-9ea1-5fa1-bdb5-292a864bd54a", 00:22:09.470 "is_configured": true, 00:22:09.470 "data_offset": 0, 00:22:09.470 "data_size": 65536 00:22:09.470 }, 00:22:09.470 { 00:22:09.470 "name": null, 00:22:09.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.470 "is_configured": false, 00:22:09.470 "data_offset": 0, 00:22:09.470 "data_size": 65536 00:22:09.470 }, 00:22:09.470 { 00:22:09.470 "name": "BaseBdev3", 00:22:09.470 "uuid": "90bd1adc-07a5-421f-a225-74faafc04267", 00:22:09.470 "is_configured": true, 00:22:09.470 "data_offset": 0, 00:22:09.470 "data_size": 65536 00:22:09.470 }, 00:22:09.470 { 00:22:09.470 "name": "BaseBdev4", 00:22:09.470 "uuid": "085d6020-59fb-454e-aadc-e430dcee6bd6", 00:22:09.470 "is_configured": true, 00:22:09.470 "data_offset": 0, 00:22:09.470 "data_size": 65536 00:22:09.470 } 00:22:09.470 ] 00:22:09.470 }' 00:22:09.470 05:19:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:09.470 05:19:28 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:09.470 05:19:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:09.470 05:19:28 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:09.470 05:19:28 -- bdev/bdev_raid.sh@660 -- # break 00:22:09.470 05:19:28 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:09.470 05:19:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:09.470 05:19:28 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:09.470 05:19:28 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:09.470 05:19:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:09.470 05:19:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.470 05:19:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.729 05:19:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:09.729 "name": "raid_bdev1", 00:22:09.729 "uuid": "3d7c01e6-fc4b-47d1-a22b-f524eb5afb6e", 00:22:09.729 "strip_size_kb": 0, 00:22:09.729 "state": "online", 00:22:09.729 "raid_level": "raid1", 00:22:09.729 "superblock": false, 00:22:09.729 "num_base_bdevs": 4, 00:22:09.729 
"num_base_bdevs_discovered": 3, 00:22:09.729 "num_base_bdevs_operational": 3, 00:22:09.729 "base_bdevs_list": [ 00:22:09.729 { 00:22:09.729 "name": "spare", 00:22:09.729 "uuid": "a56ca438-9ea1-5fa1-bdb5-292a864bd54a", 00:22:09.729 "is_configured": true, 00:22:09.729 "data_offset": 0, 00:22:09.729 "data_size": 65536 00:22:09.729 }, 00:22:09.729 { 00:22:09.729 "name": null, 00:22:09.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.729 "is_configured": false, 00:22:09.729 "data_offset": 0, 00:22:09.729 "data_size": 65536 00:22:09.729 }, 00:22:09.729 { 00:22:09.729 "name": "BaseBdev3", 00:22:09.729 "uuid": "90bd1adc-07a5-421f-a225-74faafc04267", 00:22:09.729 "is_configured": true, 00:22:09.729 "data_offset": 0, 00:22:09.729 "data_size": 65536 00:22:09.729 }, 00:22:09.729 { 00:22:09.729 "name": "BaseBdev4", 00:22:09.729 "uuid": "085d6020-59fb-454e-aadc-e430dcee6bd6", 00:22:09.729 "is_configured": true, 00:22:09.729 "data_offset": 0, 00:22:09.729 "data_size": 65536 00:22:09.729 } 00:22:09.729 ] 00:22:09.729 }' 00:22:09.729 05:19:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:09.729 05:19:28 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:09.729 05:19:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:09.729 05:19:28 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:09.729 05:19:28 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:09.729 05:19:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:09.729 05:19:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:09.729 05:19:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:09.729 05:19:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:09.729 05:19:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:09.729 05:19:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:09.729 05:19:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:09.729 05:19:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:09.729 05:19:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:09.729 05:19:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.729 05:19:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.989 05:19:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:09.989 "name": "raid_bdev1", 00:22:09.989 "uuid": "3d7c01e6-fc4b-47d1-a22b-f524eb5afb6e", 00:22:09.989 "strip_size_kb": 0, 00:22:09.989 "state": "online", 00:22:09.989 "raid_level": "raid1", 00:22:09.989 "superblock": false, 00:22:09.989 "num_base_bdevs": 4, 00:22:09.989 "num_base_bdevs_discovered": 3, 00:22:09.989 "num_base_bdevs_operational": 3, 00:22:09.989 "base_bdevs_list": [ 00:22:09.989 { 00:22:09.989 "name": "spare", 00:22:09.989 "uuid": "a56ca438-9ea1-5fa1-bdb5-292a864bd54a", 00:22:09.989 "is_configured": true, 00:22:09.989 "data_offset": 0, 00:22:09.989 "data_size": 65536 00:22:09.989 }, 00:22:09.989 { 00:22:09.989 "name": null, 00:22:09.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.989 "is_configured": false, 00:22:09.989 "data_offset": 0, 00:22:09.989 "data_size": 65536 00:22:09.989 }, 00:22:09.989 { 00:22:09.989 "name": "BaseBdev3", 00:22:09.989 "uuid": "90bd1adc-07a5-421f-a225-74faafc04267", 00:22:09.989 "is_configured": true, 00:22:09.989 "data_offset": 0, 00:22:09.989 "data_size": 65536 00:22:09.989 }, 00:22:09.989 { 00:22:09.989 "name": 
"BaseBdev4", 00:22:09.989 "uuid": "085d6020-59fb-454e-aadc-e430dcee6bd6", 00:22:09.989 "is_configured": true, 00:22:09.989 "data_offset": 0, 00:22:09.989 "data_size": 65536 00:22:09.989 } 00:22:09.989 ] 00:22:09.989 }' 00:22:09.989 05:19:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:09.989 05:19:29 -- common/autotest_common.sh@10 -- # set +x 00:22:10.558 05:19:29 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:10.558 [2024-07-26 05:19:29.536652] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:10.558 [2024-07-26 05:19:29.536686] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:10.558 00:22:10.558 Latency(us) 00:22:10.558 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.558 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:22:10.558 raid_bdev1 : 10.48 100.17 300.50 0.00 0.00 13698.87 264.38 112483.61 00:22:10.558 =================================================================================================================== 00:22:10.558 Total : 100.17 300.50 0.00 0.00 13698.87 264.38 112483.61 00:22:10.558 0 00:22:10.558 [2024-07-26 05:19:29.579772] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:10.558 [2024-07-26 05:19:29.579813] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:10.558 [2024-07-26 05:19:29.579900] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:10.558 [2024-07-26 05:19:29.579915] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:22:10.558 05:19:29 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.558 05:19:29 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:10.817 05:19:29 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:10.817 05:19:29 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:22:10.817 05:19:29 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:22:10.817 05:19:29 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:10.817 05:19:29 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:22:10.817 05:19:29 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:10.817 05:19:29 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:10.817 05:19:29 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:10.817 05:19:29 -- bdev/nbd_common.sh@12 -- # local i 00:22:10.817 05:19:29 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:10.817 05:19:29 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:10.817 05:19:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:22:11.076 /dev/nbd0 00:22:11.076 05:19:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:11.076 05:19:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:11.076 05:19:30 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:11.076 05:19:30 -- common/autotest_common.sh@857 -- # local i 00:22:11.076 05:19:30 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:11.076 05:19:30 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:11.076 05:19:30 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:11.076 05:19:30 
-- common/autotest_common.sh@861 -- # break 00:22:11.076 05:19:30 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:11.076 05:19:30 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:11.076 05:19:30 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:11.076 1+0 records in 00:22:11.076 1+0 records out 00:22:11.076 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177989 s, 23.0 MB/s 00:22:11.076 05:19:30 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:11.076 05:19:30 -- common/autotest_common.sh@874 -- # size=4096 00:22:11.076 05:19:30 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:11.076 05:19:30 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:11.076 05:19:30 -- common/autotest_common.sh@877 -- # return 0 00:22:11.076 05:19:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:11.076 05:19:30 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:11.076 05:19:30 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:11.076 05:19:30 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:22:11.076 05:19:30 -- bdev/bdev_raid.sh@678 -- # continue 00:22:11.076 05:19:30 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:11.076 05:19:30 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:22:11.076 05:19:30 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:22:11.076 05:19:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:11.076 05:19:30 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:22:11.076 05:19:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:11.076 05:19:30 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:11.076 05:19:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:11.076 05:19:30 -- bdev/nbd_common.sh@12 -- # local i 00:22:11.076 05:19:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:11.076 05:19:30 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:11.076 05:19:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:22:11.335 /dev/nbd1 00:22:11.335 05:19:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:11.335 05:19:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:11.335 05:19:30 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:11.335 05:19:30 -- common/autotest_common.sh@857 -- # local i 00:22:11.335 05:19:30 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:11.335 05:19:30 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:11.335 05:19:30 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:11.335 05:19:30 -- common/autotest_common.sh@861 -- # break 00:22:11.335 05:19:30 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:11.335 05:19:30 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:11.335 05:19:30 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:11.335 1+0 records in 00:22:11.335 1+0 records out 00:22:11.335 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000482597 s, 8.5 MB/s 00:22:11.335 05:19:30 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:11.335 05:19:30 -- common/autotest_common.sh@874 -- # size=4096 00:22:11.335 05:19:30 -- common/autotest_common.sh@875 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:11.335 05:19:30 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:11.335 05:19:30 -- common/autotest_common.sh@877 -- # return 0 00:22:11.335 05:19:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:11.335 05:19:30 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:11.335 05:19:30 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:11.335 05:19:30 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:11.335 05:19:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:11.335 05:19:30 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:11.335 05:19:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:11.335 05:19:30 -- bdev/nbd_common.sh@51 -- # local i 00:22:11.335 05:19:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:11.335 05:19:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:11.594 05:19:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:11.594 05:19:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:11.594 05:19:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:11.594 05:19:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:11.594 05:19:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:11.594 05:19:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:11.594 05:19:30 -- bdev/nbd_common.sh@41 -- # break 00:22:11.594 05:19:30 -- bdev/nbd_common.sh@45 -- # return 0 00:22:11.594 05:19:30 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:11.594 05:19:30 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:22:11.594 05:19:30 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:22:11.594 05:19:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:11.594 05:19:30 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:22:11.594 05:19:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:11.594 05:19:30 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:11.594 05:19:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:11.594 05:19:30 -- bdev/nbd_common.sh@12 -- # local i 00:22:11.594 05:19:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:11.594 05:19:30 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:11.594 05:19:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:22:11.854 /dev/nbd1 00:22:11.854 05:19:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:11.854 05:19:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:11.854 05:19:30 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:11.854 05:19:30 -- common/autotest_common.sh@857 -- # local i 00:22:11.854 05:19:30 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:11.854 05:19:30 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:11.854 05:19:30 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:11.854 05:19:30 -- common/autotest_common.sh@861 -- # break 00:22:11.854 05:19:30 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:11.854 05:19:30 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:11.854 05:19:30 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:11.854 1+0 records in 00:22:11.854 1+0 records out 
00:22:11.854 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325249 s, 12.6 MB/s 00:22:11.854 05:19:30 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:11.854 05:19:30 -- common/autotest_common.sh@874 -- # size=4096 00:22:11.854 05:19:30 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:11.854 05:19:30 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:11.854 05:19:30 -- common/autotest_common.sh@877 -- # return 0 00:22:11.854 05:19:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:11.854 05:19:30 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:11.854 05:19:30 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:11.854 05:19:30 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:11.854 05:19:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:11.854 05:19:30 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:11.854 05:19:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:11.854 05:19:30 -- bdev/nbd_common.sh@51 -- # local i 00:22:11.854 05:19:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:11.854 05:19:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:12.113 05:19:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:12.113 05:19:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:12.113 05:19:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:12.113 05:19:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:12.113 05:19:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:12.113 05:19:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:12.113 05:19:31 -- bdev/nbd_common.sh@41 -- # break 00:22:12.113 05:19:31 -- bdev/nbd_common.sh@45 -- # return 0 00:22:12.113 05:19:31 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:12.113 05:19:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:12.113 05:19:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:12.113 05:19:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:12.113 05:19:31 -- bdev/nbd_common.sh@51 -- # local i 00:22:12.113 05:19:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:12.113 05:19:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:12.372 05:19:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:12.372 05:19:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:12.372 05:19:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:12.372 05:19:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:12.372 05:19:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:12.372 05:19:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:12.372 05:19:31 -- bdev/nbd_common.sh@41 -- # break 00:22:12.372 05:19:31 -- bdev/nbd_common.sh@45 -- # return 0 00:22:12.372 05:19:31 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:22:12.372 05:19:31 -- bdev/bdev_raid.sh@709 -- # killprocess 81217 00:22:12.372 05:19:31 -- common/autotest_common.sh@926 -- # '[' -z 81217 ']' 00:22:12.372 05:19:31 -- common/autotest_common.sh@930 -- # kill -0 81217 00:22:12.372 05:19:31 -- common/autotest_common.sh@931 -- # uname 00:22:12.372 05:19:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:12.372 05:19:31 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81217 00:22:12.372 05:19:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:12.372 05:19:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:12.372 killing process with pid 81217 00:22:12.372 Received shutdown signal, test time was about 12.346020 seconds 00:22:12.372 00:22:12.372 Latency(us) 00:22:12.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.372 =================================================================================================================== 00:22:12.372 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:12.372 05:19:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81217' 00:22:12.372 05:19:31 -- common/autotest_common.sh@945 -- # kill 81217 00:22:12.372 [2024-07-26 05:19:31.428002] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:12.372 05:19:31 -- common/autotest_common.sh@950 -- # wait 81217 00:22:12.631 [2024-07-26 05:19:31.698978] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:13.568 05:19:32 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:13.568 00:22:13.568 real 0m17.304s 00:22:13.568 user 0m24.823s 00:22:13.568 sys 0m2.233s 00:22:13.568 ************************************ 00:22:13.568 END TEST raid_rebuild_test_io 00:22:13.568 ************************************ 00:22:13.568 05:19:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:13.568 05:19:32 -- common/autotest_common.sh@10 -- # set +x 00:22:13.827 05:19:32 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true 00:22:13.827 05:19:32 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:22:13.827 05:19:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:13.827 05:19:32 -- common/autotest_common.sh@10 -- # set +x 00:22:13.827 ************************************ 00:22:13.827 START TEST raid_rebuild_test_sb_io 00:22:13.827 ************************************ 00:22:13.827 05:19:32 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true true 00:22:13.827 05:19:32 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:13.827 05:19:32 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:22:13.827 05:19:32 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:22:13.827 05:19:32 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:22:13.827 05:19:32 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:13.827 05:19:32 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:13.827 05:19:32 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:22:13.827 05:19:32 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:13.827 05:19:32 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:13.827 05:19:32 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:22:13.827 05:19:32 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:13.827 05:19:32 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:13.827 05:19:32 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:22:13.827 05:19:32 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:13.827 05:19:32 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:13.827 05:19:32 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:22:13.827 05:19:32 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:13.827 05:19:32 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:13.827 05:19:32 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:13.827 05:19:32 -- 
bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:13.827 05:19:32 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:13.827 05:19:32 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:13.828 05:19:32 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:13.828 05:19:32 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:13.828 05:19:32 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:13.828 05:19:32 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:13.828 05:19:32 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:13.828 05:19:32 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:22:13.828 05:19:32 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:22:13.828 05:19:32 -- bdev/bdev_raid.sh@544 -- # raid_pid=81698 00:22:13.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:13.828 05:19:32 -- bdev/bdev_raid.sh@545 -- # waitforlisten 81698 /var/tmp/spdk-raid.sock 00:22:13.828 05:19:32 -- common/autotest_common.sh@819 -- # '[' -z 81698 ']' 00:22:13.828 05:19:32 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:13.828 05:19:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:13.828 05:19:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:13.828 05:19:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:13.828 05:19:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:13.828 05:19:32 -- common/autotest_common.sh@10 -- # set +x 00:22:13.828 [2024-07-26 05:19:32.766891] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:13.828 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:13.828 Zero copy mechanism will not be used. 
00:22:13.828 [2024-07-26 05:19:32.767304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81698 ] 00:22:13.828 [2024-07-26 05:19:32.912707] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.086 [2024-07-26 05:19:33.064176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.345 [2024-07-26 05:19:33.206408] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:14.604 05:19:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:14.604 05:19:33 -- common/autotest_common.sh@852 -- # return 0 00:22:14.604 05:19:33 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:14.604 05:19:33 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:14.604 05:19:33 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:14.862 BaseBdev1_malloc 00:22:14.862 05:19:33 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:15.121 [2024-07-26 05:19:34.051880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:15.121 [2024-07-26 05:19:34.051950] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:15.121 [2024-07-26 05:19:34.051982] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:22:15.121 [2024-07-26 05:19:34.051996] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:15.121 [2024-07-26 05:19:34.054164] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:15.121 [2024-07-26 05:19:34.054355] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:15.121 BaseBdev1 00:22:15.121 05:19:34 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:15.121 05:19:34 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:15.121 05:19:34 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:15.380 BaseBdev2_malloc 00:22:15.380 05:19:34 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:15.639 [2024-07-26 05:19:34.522611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:15.639 [2024-07-26 05:19:34.522858] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:15.639 [2024-07-26 05:19:34.522945] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:22:15.639 [2024-07-26 05:19:34.523230] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:15.639 [2024-07-26 05:19:34.525396] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:15.639 [2024-07-26 05:19:34.525576] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:15.639 BaseBdev2 00:22:15.639 05:19:34 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:15.639 05:19:34 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:15.639 05:19:34 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:15.639 BaseBdev3_malloc 00:22:15.898 05:19:34 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:15.898 [2024-07-26 05:19:34.920406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:15.898 [2024-07-26 05:19:34.920469] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:15.898 [2024-07-26 05:19:34.920497] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:22:15.898 [2024-07-26 05:19:34.920511] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:15.898 [2024-07-26 05:19:34.922646] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:15.898 [2024-07-26 05:19:34.922695] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:15.898 BaseBdev3 00:22:15.898 05:19:34 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:15.898 05:19:34 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:15.898 05:19:34 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:16.157 BaseBdev4_malloc 00:22:16.157 05:19:35 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:16.416 [2024-07-26 05:19:35.288781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:16.416 [2024-07-26 05:19:35.288842] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:16.416 [2024-07-26 05:19:35.288871] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:22:16.416 [2024-07-26 05:19:35.288885] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:16.416 [2024-07-26 05:19:35.291141] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:16.416 [2024-07-26 05:19:35.291194] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:16.416 BaseBdev4 00:22:16.416 05:19:35 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:16.675 spare_malloc 00:22:16.675 05:19:35 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:16.675 spare_delay 00:22:16.675 05:19:35 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:16.935 [2024-07-26 05:19:35.905718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:16.935 [2024-07-26 05:19:35.905941] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:16.935 [2024-07-26 05:19:35.906119] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:22:16.935 [2024-07-26 05:19:35.906238] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:16.935 [2024-07-26 05:19:35.908547] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:22:16.935 [2024-07-26 05:19:35.908715] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:16.935 spare 00:22:16.935 05:19:35 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:17.194 [2024-07-26 05:19:36.093791] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:17.194 [2024-07-26 05:19:36.095676] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:17.194 [2024-07-26 05:19:36.095883] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:17.194 [2024-07-26 05:19:36.096083] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:17.194 [2024-07-26 05:19:36.096405] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:22:17.194 [2024-07-26 05:19:36.096555] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:17.194 [2024-07-26 05:19:36.096755] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:22:17.194 [2024-07-26 05:19:36.097261] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:22:17.194 [2024-07-26 05:19:36.097283] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:22:17.194 [2024-07-26 05:19:36.097461] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:17.194 05:19:36 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:17.194 05:19:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:17.194 05:19:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:17.194 05:19:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:17.194 05:19:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:17.194 05:19:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:17.194 05:19:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:17.194 05:19:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:17.194 05:19:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:17.194 05:19:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:17.194 05:19:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.194 05:19:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.452 05:19:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:17.452 "name": "raid_bdev1", 00:22:17.452 "uuid": "f28b39b2-37e6-466a-b518-ff2b8a51a607", 00:22:17.452 "strip_size_kb": 0, 00:22:17.452 "state": "online", 00:22:17.452 "raid_level": "raid1", 00:22:17.452 "superblock": true, 00:22:17.452 "num_base_bdevs": 4, 00:22:17.452 "num_base_bdevs_discovered": 4, 00:22:17.452 "num_base_bdevs_operational": 4, 00:22:17.452 "base_bdevs_list": [ 00:22:17.452 { 00:22:17.452 "name": "BaseBdev1", 00:22:17.452 "uuid": "4bc17d69-1627-580b-b003-822d78877be8", 00:22:17.452 "is_configured": true, 00:22:17.452 "data_offset": 2048, 00:22:17.452 "data_size": 63488 00:22:17.452 }, 00:22:17.452 { 00:22:17.452 "name": "BaseBdev2", 00:22:17.452 "uuid": "5d299be1-b2c3-57de-aa0e-52acaa40d20c", 00:22:17.452 "is_configured": true, 00:22:17.452 "data_offset": 2048, 
00:22:17.452 "data_size": 63488 00:22:17.452 }, 00:22:17.452 { 00:22:17.452 "name": "BaseBdev3", 00:22:17.452 "uuid": "67831fdc-61ed-590e-a564-4babfc4bc7b9", 00:22:17.452 "is_configured": true, 00:22:17.452 "data_offset": 2048, 00:22:17.452 "data_size": 63488 00:22:17.452 }, 00:22:17.452 { 00:22:17.452 "name": "BaseBdev4", 00:22:17.452 "uuid": "d72adaf9-5381-5af2-a2b8-ecb9c7b46483", 00:22:17.452 "is_configured": true, 00:22:17.452 "data_offset": 2048, 00:22:17.452 "data_size": 63488 00:22:17.452 } 00:22:17.452 ] 00:22:17.452 }' 00:22:17.452 05:19:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:17.452 05:19:36 -- common/autotest_common.sh@10 -- # set +x 00:22:17.711 05:19:36 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:17.711 05:19:36 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:17.711 [2024-07-26 05:19:36.762106] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:17.711 05:19:36 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:22:17.711 05:19:36 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.711 05:19:36 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:17.970 05:19:36 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:22:17.970 05:19:36 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:22:17.970 05:19:36 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:17.970 05:19:36 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:17.970 [2024-07-26 05:19:37.064122] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:22:17.970 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:17.970 Zero copy mechanism will not be used. 00:22:17.970 Running I/O for 60 seconds... 
00:22:18.229 [2024-07-26 05:19:37.150706] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:18.229 [2024-07-26 05:19:37.156937] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005930 00:22:18.229 05:19:37 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:18.229 05:19:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:18.229 05:19:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:18.229 05:19:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:18.229 05:19:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:18.229 05:19:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:18.229 05:19:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:18.229 05:19:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:18.229 05:19:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:18.229 05:19:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:18.229 05:19:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.229 05:19:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.488 05:19:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:18.488 "name": "raid_bdev1", 00:22:18.488 "uuid": "f28b39b2-37e6-466a-b518-ff2b8a51a607", 00:22:18.488 "strip_size_kb": 0, 00:22:18.488 "state": "online", 00:22:18.488 "raid_level": "raid1", 00:22:18.488 "superblock": true, 00:22:18.488 "num_base_bdevs": 4, 00:22:18.488 "num_base_bdevs_discovered": 3, 00:22:18.488 "num_base_bdevs_operational": 3, 00:22:18.488 "base_bdevs_list": [ 00:22:18.488 { 00:22:18.488 "name": null, 00:22:18.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.488 "is_configured": false, 00:22:18.488 "data_offset": 2048, 00:22:18.488 "data_size": 63488 00:22:18.488 }, 00:22:18.488 { 00:22:18.488 "name": "BaseBdev2", 00:22:18.488 "uuid": "5d299be1-b2c3-57de-aa0e-52acaa40d20c", 00:22:18.488 "is_configured": true, 00:22:18.488 "data_offset": 2048, 00:22:18.488 "data_size": 63488 00:22:18.488 }, 00:22:18.488 { 00:22:18.488 "name": "BaseBdev3", 00:22:18.488 "uuid": "67831fdc-61ed-590e-a564-4babfc4bc7b9", 00:22:18.488 "is_configured": true, 00:22:18.488 "data_offset": 2048, 00:22:18.488 "data_size": 63488 00:22:18.488 }, 00:22:18.488 { 00:22:18.488 "name": "BaseBdev4", 00:22:18.488 "uuid": "d72adaf9-5381-5af2-a2b8-ecb9c7b46483", 00:22:18.488 "is_configured": true, 00:22:18.488 "data_offset": 2048, 00:22:18.488 "data_size": 63488 00:22:18.488 } 00:22:18.488 ] 00:22:18.488 }' 00:22:18.488 05:19:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:18.488 05:19:37 -- common/autotest_common.sh@10 -- # set +x 00:22:18.747 05:19:37 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:19.006 [2024-07-26 05:19:37.948158] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:19.006 [2024-07-26 05:19:37.948221] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:19.006 05:19:37 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:19.006 [2024-07-26 05:19:38.005764] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:22:19.006 [2024-07-26 05:19:38.007870] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:19.264 
[2024-07-26 05:19:38.124237] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:19.264 [2024-07-26 05:19:38.124866] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:19.264 [2024-07-26 05:19:38.269890] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:19.265 [2024-07-26 05:19:38.270235] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:19.523 [2024-07-26 05:19:38.512618] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:19.523 [2024-07-26 05:19:38.512982] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:19.523 [2024-07-26 05:19:38.628218] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:20.091 05:19:38 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:20.091 05:19:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:20.091 05:19:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:20.091 05:19:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:20.091 05:19:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:20.091 05:19:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.091 05:19:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.091 [2024-07-26 05:19:39.041992] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:20.350 05:19:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:20.350 "name": "raid_bdev1", 00:22:20.350 "uuid": "f28b39b2-37e6-466a-b518-ff2b8a51a607", 00:22:20.350 "strip_size_kb": 0, 00:22:20.350 "state": "online", 00:22:20.350 "raid_level": "raid1", 00:22:20.350 "superblock": true, 00:22:20.350 "num_base_bdevs": 4, 00:22:20.350 "num_base_bdevs_discovered": 4, 00:22:20.350 "num_base_bdevs_operational": 4, 00:22:20.350 "process": { 00:22:20.350 "type": "rebuild", 00:22:20.350 "target": "spare", 00:22:20.350 "progress": { 00:22:20.350 "blocks": 16384, 00:22:20.350 "percent": 25 00:22:20.350 } 00:22:20.350 }, 00:22:20.350 "base_bdevs_list": [ 00:22:20.350 { 00:22:20.350 "name": "spare", 00:22:20.350 "uuid": "864634a4-911d-592a-9076-96d1315f1490", 00:22:20.350 "is_configured": true, 00:22:20.350 "data_offset": 2048, 00:22:20.350 "data_size": 63488 00:22:20.350 }, 00:22:20.350 { 00:22:20.350 "name": "BaseBdev2", 00:22:20.350 "uuid": "5d299be1-b2c3-57de-aa0e-52acaa40d20c", 00:22:20.350 "is_configured": true, 00:22:20.350 "data_offset": 2048, 00:22:20.350 "data_size": 63488 00:22:20.350 }, 00:22:20.350 { 00:22:20.350 "name": "BaseBdev3", 00:22:20.350 "uuid": "67831fdc-61ed-590e-a564-4babfc4bc7b9", 00:22:20.350 "is_configured": true, 00:22:20.350 "data_offset": 2048, 00:22:20.350 "data_size": 63488 00:22:20.350 }, 00:22:20.350 { 00:22:20.350 "name": "BaseBdev4", 00:22:20.350 "uuid": "d72adaf9-5381-5af2-a2b8-ecb9c7b46483", 00:22:20.350 "is_configured": true, 00:22:20.350 "data_offset": 2048, 00:22:20.350 "data_size": 63488 00:22:20.350 } 00:22:20.350 ] 00:22:20.350 }' 00:22:20.350 05:19:39 -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:20.350 05:19:39 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:20.350 05:19:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:20.350 05:19:39 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:20.350 05:19:39 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:20.609 [2024-07-26 05:19:39.495030] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:20.609 [2024-07-26 05:19:39.647553] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:20.609 [2024-07-26 05:19:39.657913] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:20.609 [2024-07-26 05:19:39.682304] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005930 00:22:20.868 05:19:39 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:20.868 05:19:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:20.868 05:19:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:20.868 05:19:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:20.868 05:19:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:20.868 05:19:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:20.868 05:19:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:20.868 05:19:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:20.868 05:19:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:20.868 05:19:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:20.868 05:19:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.868 05:19:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.868 05:19:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:20.868 "name": "raid_bdev1", 00:22:20.868 "uuid": "f28b39b2-37e6-466a-b518-ff2b8a51a607", 00:22:20.868 "strip_size_kb": 0, 00:22:20.868 "state": "online", 00:22:20.868 "raid_level": "raid1", 00:22:20.868 "superblock": true, 00:22:20.868 "num_base_bdevs": 4, 00:22:20.868 "num_base_bdevs_discovered": 3, 00:22:20.868 "num_base_bdevs_operational": 3, 00:22:20.868 "base_bdevs_list": [ 00:22:20.868 { 00:22:20.868 "name": null, 00:22:20.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.868 "is_configured": false, 00:22:20.868 "data_offset": 2048, 00:22:20.868 "data_size": 63488 00:22:20.868 }, 00:22:20.868 { 00:22:20.868 "name": "BaseBdev2", 00:22:20.868 "uuid": "5d299be1-b2c3-57de-aa0e-52acaa40d20c", 00:22:20.868 "is_configured": true, 00:22:20.868 "data_offset": 2048, 00:22:20.868 "data_size": 63488 00:22:20.868 }, 00:22:20.868 { 00:22:20.868 "name": "BaseBdev3", 00:22:20.868 "uuid": "67831fdc-61ed-590e-a564-4babfc4bc7b9", 00:22:20.868 "is_configured": true, 00:22:20.868 "data_offset": 2048, 00:22:20.868 "data_size": 63488 00:22:20.868 }, 00:22:20.868 { 00:22:20.868 "name": "BaseBdev4", 00:22:20.868 "uuid": "d72adaf9-5381-5af2-a2b8-ecb9c7b46483", 00:22:20.868 "is_configured": true, 00:22:20.868 "data_offset": 2048, 00:22:20.868 "data_size": 63488 00:22:20.868 } 00:22:20.868 ] 00:22:20.868 }' 00:22:20.868 05:19:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:20.868 05:19:39 -- common/autotest_common.sh@10 -- # set +x 00:22:21.437 
05:19:40 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:21.437 05:19:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:21.437 05:19:40 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:21.437 05:19:40 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:21.437 05:19:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:21.437 05:19:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.437 05:19:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.437 05:19:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:21.437 "name": "raid_bdev1", 00:22:21.437 "uuid": "f28b39b2-37e6-466a-b518-ff2b8a51a607", 00:22:21.437 "strip_size_kb": 0, 00:22:21.437 "state": "online", 00:22:21.437 "raid_level": "raid1", 00:22:21.437 "superblock": true, 00:22:21.437 "num_base_bdevs": 4, 00:22:21.437 "num_base_bdevs_discovered": 3, 00:22:21.437 "num_base_bdevs_operational": 3, 00:22:21.437 "base_bdevs_list": [ 00:22:21.437 { 00:22:21.437 "name": null, 00:22:21.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:21.437 "is_configured": false, 00:22:21.437 "data_offset": 2048, 00:22:21.437 "data_size": 63488 00:22:21.437 }, 00:22:21.437 { 00:22:21.437 "name": "BaseBdev2", 00:22:21.437 "uuid": "5d299be1-b2c3-57de-aa0e-52acaa40d20c", 00:22:21.437 "is_configured": true, 00:22:21.437 "data_offset": 2048, 00:22:21.437 "data_size": 63488 00:22:21.437 }, 00:22:21.437 { 00:22:21.437 "name": "BaseBdev3", 00:22:21.437 "uuid": "67831fdc-61ed-590e-a564-4babfc4bc7b9", 00:22:21.437 "is_configured": true, 00:22:21.437 "data_offset": 2048, 00:22:21.437 "data_size": 63488 00:22:21.437 }, 00:22:21.437 { 00:22:21.437 "name": "BaseBdev4", 00:22:21.437 "uuid": "d72adaf9-5381-5af2-a2b8-ecb9c7b46483", 00:22:21.437 "is_configured": true, 00:22:21.437 "data_offset": 2048, 00:22:21.437 "data_size": 63488 00:22:21.437 } 00:22:21.437 ] 00:22:21.437 }' 00:22:21.437 05:19:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:21.437 05:19:40 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:21.437 05:19:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:21.437 05:19:40 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:21.437 05:19:40 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:21.696 [2024-07-26 05:19:40.752043] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:21.696 [2024-07-26 05:19:40.752134] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:21.696 [2024-07-26 05:19:40.787210] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:22:21.696 [2024-07-26 05:19:40.789294] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:21.696 05:19:40 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:21.966 [2024-07-26 05:19:40.905576] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:21.966 [2024-07-26 05:19:40.906805] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:22.251 [2024-07-26 05:19:41.121430] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 
00:22:22.251 [2024-07-26 05:19:41.122112] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:22.509 [2024-07-26 05:19:41.488228] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:22.768 [2024-07-26 05:19:41.690554] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:22.768 [2024-07-26 05:19:41.691437] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:22.768 05:19:41 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:22.768 05:19:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:22.768 05:19:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:22.768 05:19:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:22.768 05:19:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:22.768 05:19:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.768 05:19:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.027 [2024-07-26 05:19:42.027040] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:23.027 [2024-07-26 05:19:42.027695] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:23.027 05:19:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:23.027 "name": "raid_bdev1", 00:22:23.027 "uuid": "f28b39b2-37e6-466a-b518-ff2b8a51a607", 00:22:23.027 "strip_size_kb": 0, 00:22:23.027 "state": "online", 00:22:23.027 "raid_level": "raid1", 00:22:23.027 "superblock": true, 00:22:23.027 "num_base_bdevs": 4, 00:22:23.027 "num_base_bdevs_discovered": 4, 00:22:23.027 "num_base_bdevs_operational": 4, 00:22:23.027 "process": { 00:22:23.027 "type": "rebuild", 00:22:23.027 "target": "spare", 00:22:23.027 "progress": { 00:22:23.027 "blocks": 12288, 00:22:23.027 "percent": 19 00:22:23.027 } 00:22:23.027 }, 00:22:23.027 "base_bdevs_list": [ 00:22:23.027 { 00:22:23.027 "name": "spare", 00:22:23.027 "uuid": "864634a4-911d-592a-9076-96d1315f1490", 00:22:23.027 "is_configured": true, 00:22:23.027 "data_offset": 2048, 00:22:23.027 "data_size": 63488 00:22:23.027 }, 00:22:23.027 { 00:22:23.027 "name": "BaseBdev2", 00:22:23.027 "uuid": "5d299be1-b2c3-57de-aa0e-52acaa40d20c", 00:22:23.027 "is_configured": true, 00:22:23.027 "data_offset": 2048, 00:22:23.027 "data_size": 63488 00:22:23.027 }, 00:22:23.027 { 00:22:23.027 "name": "BaseBdev3", 00:22:23.027 "uuid": "67831fdc-61ed-590e-a564-4babfc4bc7b9", 00:22:23.027 "is_configured": true, 00:22:23.027 "data_offset": 2048, 00:22:23.027 "data_size": 63488 00:22:23.027 }, 00:22:23.027 { 00:22:23.027 "name": "BaseBdev4", 00:22:23.027 "uuid": "d72adaf9-5381-5af2-a2b8-ecb9c7b46483", 00:22:23.027 "is_configured": true, 00:22:23.027 "data_offset": 2048, 00:22:23.027 "data_size": 63488 00:22:23.027 } 00:22:23.027 ] 00:22:23.027 }' 00:22:23.027 05:19:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:23.027 05:19:42 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:23.027 05:19:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:23.027 05:19:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == 
\s\p\a\r\e ]] 00:22:23.027 05:19:42 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:22:23.027 05:19:42 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:22:23.027 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:22:23.027 05:19:42 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:23.027 05:19:42 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:23.027 05:19:42 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:23.027 05:19:42 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:23.286 [2024-07-26 05:19:42.237031] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:23.286 [2024-07-26 05:19:42.237599] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:23.286 [2024-07-26 05:19:42.296736] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:23.545 [2024-07-26 05:19:42.486649] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000005930 00:22:23.545 [2024-07-26 05:19:42.486995] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000005ad0 00:22:23.545 [2024-07-26 05:19:42.614092] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:23.545 05:19:42 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:23.545 05:19:42 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:23.545 05:19:42 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:23.545 05:19:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:23.545 05:19:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:23.545 05:19:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:23.545 05:19:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:23.546 05:19:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.546 05:19:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.804 05:19:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:23.804 "name": "raid_bdev1", 00:22:23.804 "uuid": "f28b39b2-37e6-466a-b518-ff2b8a51a607", 00:22:23.804 "strip_size_kb": 0, 00:22:23.804 "state": "online", 00:22:23.804 "raid_level": "raid1", 00:22:23.804 "superblock": true, 00:22:23.804 "num_base_bdevs": 4, 00:22:23.804 "num_base_bdevs_discovered": 3, 00:22:23.804 "num_base_bdevs_operational": 3, 00:22:23.804 "process": { 00:22:23.804 "type": "rebuild", 00:22:23.804 "target": "spare", 00:22:23.804 "progress": { 00:22:23.804 "blocks": 20480, 00:22:23.804 "percent": 32 00:22:23.804 } 00:22:23.804 }, 00:22:23.804 "base_bdevs_list": [ 00:22:23.804 { 00:22:23.804 "name": "spare", 00:22:23.804 "uuid": "864634a4-911d-592a-9076-96d1315f1490", 00:22:23.804 "is_configured": true, 00:22:23.804 "data_offset": 2048, 00:22:23.805 "data_size": 63488 00:22:23.805 }, 00:22:23.805 { 00:22:23.805 "name": null, 00:22:23.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.805 "is_configured": false, 00:22:23.805 "data_offset": 2048, 00:22:23.805 "data_size": 63488 00:22:23.805 }, 00:22:23.805 { 00:22:23.805 "name": "BaseBdev3", 00:22:23.805 "uuid": "67831fdc-61ed-590e-a564-4babfc4bc7b9", 
00:22:23.805 "is_configured": true, 00:22:23.805 "data_offset": 2048, 00:22:23.805 "data_size": 63488 00:22:23.805 }, 00:22:23.805 { 00:22:23.805 "name": "BaseBdev4", 00:22:23.805 "uuid": "d72adaf9-5381-5af2-a2b8-ecb9c7b46483", 00:22:23.805 "is_configured": true, 00:22:23.805 "data_offset": 2048, 00:22:23.805 "data_size": 63488 00:22:23.805 } 00:22:23.805 ] 00:22:23.805 }' 00:22:23.805 05:19:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:23.805 [2024-07-26 05:19:42.821739] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:23.805 05:19:42 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:23.805 05:19:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:23.805 05:19:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:23.805 05:19:42 -- bdev/bdev_raid.sh@657 -- # local timeout=489 00:22:23.805 05:19:42 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:23.805 05:19:42 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:23.805 05:19:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:23.805 05:19:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:23.805 05:19:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:23.805 05:19:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:23.805 05:19:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.805 05:19:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.064 05:19:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:24.064 "name": "raid_bdev1", 00:22:24.064 "uuid": "f28b39b2-37e6-466a-b518-ff2b8a51a607", 00:22:24.064 "strip_size_kb": 0, 00:22:24.064 "state": "online", 00:22:24.064 "raid_level": "raid1", 00:22:24.064 "superblock": true, 00:22:24.064 "num_base_bdevs": 4, 00:22:24.064 "num_base_bdevs_discovered": 3, 00:22:24.064 "num_base_bdevs_operational": 3, 00:22:24.064 "process": { 00:22:24.064 "type": "rebuild", 00:22:24.064 "target": "spare", 00:22:24.064 "progress": { 00:22:24.064 "blocks": 22528, 00:22:24.064 "percent": 35 00:22:24.064 } 00:22:24.064 }, 00:22:24.064 "base_bdevs_list": [ 00:22:24.064 { 00:22:24.064 "name": "spare", 00:22:24.064 "uuid": "864634a4-911d-592a-9076-96d1315f1490", 00:22:24.064 "is_configured": true, 00:22:24.064 "data_offset": 2048, 00:22:24.064 "data_size": 63488 00:22:24.064 }, 00:22:24.064 { 00:22:24.064 "name": null, 00:22:24.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.064 "is_configured": false, 00:22:24.064 "data_offset": 2048, 00:22:24.064 "data_size": 63488 00:22:24.064 }, 00:22:24.064 { 00:22:24.064 "name": "BaseBdev3", 00:22:24.064 "uuid": "67831fdc-61ed-590e-a564-4babfc4bc7b9", 00:22:24.064 "is_configured": true, 00:22:24.064 "data_offset": 2048, 00:22:24.064 "data_size": 63488 00:22:24.064 }, 00:22:24.064 { 00:22:24.064 "name": "BaseBdev4", 00:22:24.064 "uuid": "d72adaf9-5381-5af2-a2b8-ecb9c7b46483", 00:22:24.064 "is_configured": true, 00:22:24.064 "data_offset": 2048, 00:22:24.064 "data_size": 63488 00:22:24.064 } 00:22:24.064 ] 00:22:24.064 }' 00:22:24.064 05:19:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:24.064 05:19:43 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:24.064 05:19:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:24.064 05:19:43 -- 
bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:24.064 05:19:43 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:24.064 [2024-07-26 05:19:43.161864] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:22:24.631 [2024-07-26 05:19:43.622227] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:22:24.890 [2024-07-26 05:19:43.850284] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:22:25.149 05:19:44 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:25.149 05:19:44 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:25.149 05:19:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:25.149 05:19:44 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:25.149 05:19:44 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:25.149 05:19:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:25.149 05:19:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.149 05:19:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.149 [2024-07-26 05:19:44.066837] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:22:25.409 05:19:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:25.409 "name": "raid_bdev1", 00:22:25.409 "uuid": "f28b39b2-37e6-466a-b518-ff2b8a51a607", 00:22:25.409 "strip_size_kb": 0, 00:22:25.409 "state": "online", 00:22:25.409 "raid_level": "raid1", 00:22:25.409 "superblock": true, 00:22:25.409 "num_base_bdevs": 4, 00:22:25.409 "num_base_bdevs_discovered": 3, 00:22:25.409 "num_base_bdevs_operational": 3, 00:22:25.409 "process": { 00:22:25.409 "type": "rebuild", 00:22:25.409 "target": "spare", 00:22:25.409 "progress": { 00:22:25.409 "blocks": 43008, 00:22:25.409 "percent": 67 00:22:25.409 } 00:22:25.409 }, 00:22:25.409 "base_bdevs_list": [ 00:22:25.409 { 00:22:25.409 "name": "spare", 00:22:25.409 "uuid": "864634a4-911d-592a-9076-96d1315f1490", 00:22:25.409 "is_configured": true, 00:22:25.409 "data_offset": 2048, 00:22:25.409 "data_size": 63488 00:22:25.409 }, 00:22:25.409 { 00:22:25.409 "name": null, 00:22:25.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.409 "is_configured": false, 00:22:25.409 "data_offset": 2048, 00:22:25.409 "data_size": 63488 00:22:25.409 }, 00:22:25.409 { 00:22:25.409 "name": "BaseBdev3", 00:22:25.409 "uuid": "67831fdc-61ed-590e-a564-4babfc4bc7b9", 00:22:25.409 "is_configured": true, 00:22:25.409 "data_offset": 2048, 00:22:25.409 "data_size": 63488 00:22:25.409 }, 00:22:25.409 { 00:22:25.409 "name": "BaseBdev4", 00:22:25.409 "uuid": "d72adaf9-5381-5af2-a2b8-ecb9c7b46483", 00:22:25.409 "is_configured": true, 00:22:25.409 "data_offset": 2048, 00:22:25.409 "data_size": 63488 00:22:25.409 } 00:22:25.409 ] 00:22:25.409 }' 00:22:25.409 05:19:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:25.409 05:19:44 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:25.409 05:19:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:25.409 05:19:44 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:25.409 05:19:44 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:25.668 [2024-07-26 05:19:44.719456] bdev_raid.c: 
723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:22:26.237 [2024-07-26 05:19:45.057042] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:22:26.237 05:19:45 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:26.237 05:19:45 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:26.237 05:19:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:26.237 05:19:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:26.237 05:19:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:26.237 05:19:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:26.237 05:19:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.237 05:19:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.496 [2024-07-26 05:19:45.491240] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:26.496 05:19:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:26.496 "name": "raid_bdev1", 00:22:26.496 "uuid": "f28b39b2-37e6-466a-b518-ff2b8a51a607", 00:22:26.496 "strip_size_kb": 0, 00:22:26.496 "state": "online", 00:22:26.496 "raid_level": "raid1", 00:22:26.496 "superblock": true, 00:22:26.496 "num_base_bdevs": 4, 00:22:26.496 "num_base_bdevs_discovered": 3, 00:22:26.496 "num_base_bdevs_operational": 3, 00:22:26.496 "process": { 00:22:26.496 "type": "rebuild", 00:22:26.496 "target": "spare", 00:22:26.496 "progress": { 00:22:26.496 "blocks": 61440, 00:22:26.496 "percent": 96 00:22:26.496 } 00:22:26.496 }, 00:22:26.496 "base_bdevs_list": [ 00:22:26.496 { 00:22:26.496 "name": "spare", 00:22:26.496 "uuid": "864634a4-911d-592a-9076-96d1315f1490", 00:22:26.496 "is_configured": true, 00:22:26.497 "data_offset": 2048, 00:22:26.497 "data_size": 63488 00:22:26.497 }, 00:22:26.497 { 00:22:26.497 "name": null, 00:22:26.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.497 "is_configured": false, 00:22:26.497 "data_offset": 2048, 00:22:26.497 "data_size": 63488 00:22:26.497 }, 00:22:26.497 { 00:22:26.497 "name": "BaseBdev3", 00:22:26.497 "uuid": "67831fdc-61ed-590e-a564-4babfc4bc7b9", 00:22:26.497 "is_configured": true, 00:22:26.497 "data_offset": 2048, 00:22:26.497 "data_size": 63488 00:22:26.497 }, 00:22:26.497 { 00:22:26.497 "name": "BaseBdev4", 00:22:26.497 "uuid": "d72adaf9-5381-5af2-a2b8-ecb9c7b46483", 00:22:26.497 "is_configured": true, 00:22:26.497 "data_offset": 2048, 00:22:26.497 "data_size": 63488 00:22:26.497 } 00:22:26.497 ] 00:22:26.497 }' 00:22:26.497 05:19:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:26.497 05:19:45 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:26.497 05:19:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:26.497 05:19:45 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:26.497 05:19:45 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:26.497 [2024-07-26 05:19:45.591251] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:26.497 [2024-07-26 05:19:45.593275] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:27.434 05:19:46 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:27.434 05:19:46 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
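The progress checks traced above reduce to polling bdev_raid_get_bdevs and filtering the result with jq. A minimal sketch of that polling loop, assuming the same rpc.py client, socket path and raid bdev name that appear in this trace (the loop itself is illustrative and not part of the test script):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# Poll the raid bdev until the rebuild process is no longer reported.
while :; do
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]] || break
    echo "rebuild on $(jq -r '.process.target' <<< "$info"): $(jq -r '.process.progress.percent' <<< "$info")%"
    sleep 1
done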
00:22:27.434 05:19:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:27.434 05:19:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:27.434 05:19:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:27.434 05:19:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:27.434 05:19:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.434 05:19:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.694 05:19:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:27.694 "name": "raid_bdev1", 00:22:27.694 "uuid": "f28b39b2-37e6-466a-b518-ff2b8a51a607", 00:22:27.694 "strip_size_kb": 0, 00:22:27.694 "state": "online", 00:22:27.694 "raid_level": "raid1", 00:22:27.694 "superblock": true, 00:22:27.694 "num_base_bdevs": 4, 00:22:27.694 "num_base_bdevs_discovered": 3, 00:22:27.694 "num_base_bdevs_operational": 3, 00:22:27.694 "base_bdevs_list": [ 00:22:27.694 { 00:22:27.694 "name": "spare", 00:22:27.694 "uuid": "864634a4-911d-592a-9076-96d1315f1490", 00:22:27.694 "is_configured": true, 00:22:27.694 "data_offset": 2048, 00:22:27.694 "data_size": 63488 00:22:27.694 }, 00:22:27.694 { 00:22:27.694 "name": null, 00:22:27.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.694 "is_configured": false, 00:22:27.694 "data_offset": 2048, 00:22:27.694 "data_size": 63488 00:22:27.694 }, 00:22:27.694 { 00:22:27.694 "name": "BaseBdev3", 00:22:27.694 "uuid": "67831fdc-61ed-590e-a564-4babfc4bc7b9", 00:22:27.694 "is_configured": true, 00:22:27.694 "data_offset": 2048, 00:22:27.694 "data_size": 63488 00:22:27.694 }, 00:22:27.694 { 00:22:27.694 "name": "BaseBdev4", 00:22:27.694 "uuid": "d72adaf9-5381-5af2-a2b8-ecb9c7b46483", 00:22:27.694 "is_configured": true, 00:22:27.694 "data_offset": 2048, 00:22:27.694 "data_size": 63488 00:22:27.694 } 00:22:27.694 ] 00:22:27.694 }' 00:22:27.694 05:19:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:27.694 05:19:46 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:27.694 05:19:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:27.694 05:19:46 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:27.694 05:19:46 -- bdev/bdev_raid.sh@660 -- # break 00:22:27.694 05:19:46 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:27.694 05:19:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:27.694 05:19:46 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:27.694 05:19:46 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:27.694 05:19:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:27.694 05:19:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.694 05:19:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.953 05:19:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:27.953 "name": "raid_bdev1", 00:22:27.953 "uuid": "f28b39b2-37e6-466a-b518-ff2b8a51a607", 00:22:27.953 "strip_size_kb": 0, 00:22:27.953 "state": "online", 00:22:27.953 "raid_level": "raid1", 00:22:27.953 "superblock": true, 00:22:27.953 "num_base_bdevs": 4, 00:22:27.954 "num_base_bdevs_discovered": 3, 00:22:27.954 "num_base_bdevs_operational": 3, 00:22:27.954 "base_bdevs_list": [ 00:22:27.954 { 00:22:27.954 "name": "spare", 00:22:27.954 "uuid": "864634a4-911d-592a-9076-96d1315f1490", 00:22:27.954 
"is_configured": true, 00:22:27.954 "data_offset": 2048, 00:22:27.954 "data_size": 63488 00:22:27.954 }, 00:22:27.954 { 00:22:27.954 "name": null, 00:22:27.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.954 "is_configured": false, 00:22:27.954 "data_offset": 2048, 00:22:27.954 "data_size": 63488 00:22:27.954 }, 00:22:27.954 { 00:22:27.954 "name": "BaseBdev3", 00:22:27.954 "uuid": "67831fdc-61ed-590e-a564-4babfc4bc7b9", 00:22:27.954 "is_configured": true, 00:22:27.954 "data_offset": 2048, 00:22:27.954 "data_size": 63488 00:22:27.954 }, 00:22:27.954 { 00:22:27.954 "name": "BaseBdev4", 00:22:27.954 "uuid": "d72adaf9-5381-5af2-a2b8-ecb9c7b46483", 00:22:27.954 "is_configured": true, 00:22:27.954 "data_offset": 2048, 00:22:27.954 "data_size": 63488 00:22:27.954 } 00:22:27.954 ] 00:22:27.954 }' 00:22:27.954 05:19:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:28.213 05:19:47 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:28.213 05:19:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:28.213 05:19:47 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:28.213 05:19:47 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:28.213 05:19:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:28.213 05:19:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:28.213 05:19:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:28.213 05:19:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:28.213 05:19:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:28.213 05:19:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:28.213 05:19:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:28.213 05:19:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:28.213 05:19:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:28.213 05:19:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.213 05:19:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.472 05:19:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:28.472 "name": "raid_bdev1", 00:22:28.472 "uuid": "f28b39b2-37e6-466a-b518-ff2b8a51a607", 00:22:28.472 "strip_size_kb": 0, 00:22:28.472 "state": "online", 00:22:28.473 "raid_level": "raid1", 00:22:28.473 "superblock": true, 00:22:28.473 "num_base_bdevs": 4, 00:22:28.473 "num_base_bdevs_discovered": 3, 00:22:28.473 "num_base_bdevs_operational": 3, 00:22:28.473 "base_bdevs_list": [ 00:22:28.473 { 00:22:28.473 "name": "spare", 00:22:28.473 "uuid": "864634a4-911d-592a-9076-96d1315f1490", 00:22:28.473 "is_configured": true, 00:22:28.473 "data_offset": 2048, 00:22:28.473 "data_size": 63488 00:22:28.473 }, 00:22:28.473 { 00:22:28.473 "name": null, 00:22:28.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.473 "is_configured": false, 00:22:28.473 "data_offset": 2048, 00:22:28.473 "data_size": 63488 00:22:28.473 }, 00:22:28.473 { 00:22:28.473 "name": "BaseBdev3", 00:22:28.473 "uuid": "67831fdc-61ed-590e-a564-4babfc4bc7b9", 00:22:28.473 "is_configured": true, 00:22:28.473 "data_offset": 2048, 00:22:28.473 "data_size": 63488 00:22:28.473 }, 00:22:28.473 { 00:22:28.473 "name": "BaseBdev4", 00:22:28.473 "uuid": "d72adaf9-5381-5af2-a2b8-ecb9c7b46483", 00:22:28.473 "is_configured": true, 00:22:28.473 "data_offset": 2048, 00:22:28.473 "data_size": 63488 00:22:28.473 } 00:22:28.473 ] 
00:22:28.473 }' 00:22:28.473 05:19:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:28.473 05:19:47 -- common/autotest_common.sh@10 -- # set +x 00:22:28.473 05:19:47 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:28.732 [2024-07-26 05:19:47.742724] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:28.732 [2024-07-26 05:19:47.742761] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:28.732 00:22:28.732 Latency(us) 00:22:28.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.732 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:22:28.732 raid_bdev1 : 10.71 98.52 295.56 0.00 0.00 14085.38 266.24 111530.36 00:22:28.732 =================================================================================================================== 00:22:28.732 Total : 98.52 295.56 0.00 0.00 14085.38 266.24 111530.36 00:22:28.732 [2024-07-26 05:19:47.789846] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:28.732 [2024-07-26 05:19:47.789891] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:28.732 [2024-07-26 05:19:47.789989] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:28.732 [2024-07-26 05:19:47.790040] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:22:28.732 0 00:22:28.732 05:19:47 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.732 05:19:47 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:28.990 05:19:47 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:28.990 05:19:47 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:22:28.990 05:19:47 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:22:28.990 05:19:47 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:28.990 05:19:47 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:22:28.990 05:19:47 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:28.990 05:19:47 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:28.990 05:19:47 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:28.990 05:19:47 -- bdev/nbd_common.sh@12 -- # local i 00:22:28.990 05:19:47 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:28.990 05:19:47 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:28.990 05:19:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:22:29.249 /dev/nbd0 00:22:29.249 05:19:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:29.249 05:19:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:29.249 05:19:48 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:29.249 05:19:48 -- common/autotest_common.sh@857 -- # local i 00:22:29.249 05:19:48 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:29.249 05:19:48 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:29.249 05:19:48 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:29.249 05:19:48 -- common/autotest_common.sh@861 -- # break 00:22:29.249 05:19:48 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:29.249 05:19:48 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:29.249 
05:19:48 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:29.249 1+0 records in 00:22:29.249 1+0 records out 00:22:29.249 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331086 s, 12.4 MB/s 00:22:29.249 05:19:48 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:29.249 05:19:48 -- common/autotest_common.sh@874 -- # size=4096 00:22:29.249 05:19:48 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:29.249 05:19:48 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:29.249 05:19:48 -- common/autotest_common.sh@877 -- # return 0 00:22:29.249 05:19:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:29.249 05:19:48 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:29.249 05:19:48 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:29.249 05:19:48 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:22:29.249 05:19:48 -- bdev/bdev_raid.sh@678 -- # continue 00:22:29.249 05:19:48 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:29.249 05:19:48 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:22:29.249 05:19:48 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:22:29.249 05:19:48 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:29.249 05:19:48 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:22:29.249 05:19:48 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:29.249 05:19:48 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:29.249 05:19:48 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:29.249 05:19:48 -- bdev/nbd_common.sh@12 -- # local i 00:22:29.249 05:19:48 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:29.249 05:19:48 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:29.249 05:19:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:22:29.508 /dev/nbd1 00:22:29.508 05:19:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:29.508 05:19:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:29.508 05:19:48 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:29.508 05:19:48 -- common/autotest_common.sh@857 -- # local i 00:22:29.508 05:19:48 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:29.508 05:19:48 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:29.508 05:19:48 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:29.508 05:19:48 -- common/autotest_common.sh@861 -- # break 00:22:29.508 05:19:48 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:29.508 05:19:48 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:29.508 05:19:48 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:29.508 1+0 records in 00:22:29.508 1+0 records out 00:22:29.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216806 s, 18.9 MB/s 00:22:29.508 05:19:48 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:29.508 05:19:48 -- common/autotest_common.sh@874 -- # size=4096 00:22:29.508 05:19:48 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:29.508 05:19:48 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:29.508 05:19:48 -- common/autotest_common.sh@877 -- # return 0 
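The NBD round-trip being traced here follows a short, repeatable pattern: export a bdev over /dev/nbdX, confirm the device answers a direct read, compare it against another exported member past the metadata area, and tear the export down. A hand-written sketch of that sequence, assuming the rpc.py client and socket from this trace; the device paths, bdev names and the 1048576-byte offset (the 2048-block data_offset reported above) are copied from the log, and the dd target is simplified to /dev/null for illustration:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# Export the rebuilt 'spare' and one surviving base bdev over NBD.
"$rpc" -s "$sock" nbd_start_disk spare /dev/nbd0
"$rpc" -s "$sock" nbd_start_disk BaseBdev3 /dev/nbd1
# Confirm the kernel sees the device and that a direct 4 KiB read succeeds.
grep -q -w nbd0 /proc/partitions
dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct
# Compare the data regions, skipping the superblock/metadata area at the start of each device.
cmp -i 1048576 /dev/nbd0 /dev/nbd1
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd1
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0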
00:22:29.508 05:19:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:29.508 05:19:48 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:29.508 05:19:48 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:29.767 05:19:48 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:29.767 05:19:48 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:29.767 05:19:48 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:29.767 05:19:48 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:29.767 05:19:48 -- bdev/nbd_common.sh@51 -- # local i 00:22:29.767 05:19:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:29.767 05:19:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:30.026 05:19:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:30.026 05:19:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:30.026 05:19:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:30.026 05:19:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:30.026 05:19:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:30.026 05:19:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:30.026 05:19:48 -- bdev/nbd_common.sh@41 -- # break 00:22:30.026 05:19:48 -- bdev/nbd_common.sh@45 -- # return 0 00:22:30.026 05:19:48 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:30.026 05:19:48 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:22:30.026 05:19:48 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:22:30.026 05:19:48 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:30.026 05:19:48 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:22:30.026 05:19:48 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:30.026 05:19:48 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:30.026 05:19:48 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:30.026 05:19:48 -- bdev/nbd_common.sh@12 -- # local i 00:22:30.026 05:19:48 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:30.026 05:19:48 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:30.026 05:19:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:22:30.285 /dev/nbd1 00:22:30.285 05:19:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:30.285 05:19:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:30.285 05:19:49 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:30.285 05:19:49 -- common/autotest_common.sh@857 -- # local i 00:22:30.285 05:19:49 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:30.285 05:19:49 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:30.285 05:19:49 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:30.285 05:19:49 -- common/autotest_common.sh@861 -- # break 00:22:30.285 05:19:49 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:30.285 05:19:49 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:30.285 05:19:49 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:30.285 1+0 records in 00:22:30.285 1+0 records out 00:22:30.285 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261342 s, 15.7 MB/s 00:22:30.285 05:19:49 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:22:30.285 05:19:49 -- common/autotest_common.sh@874 -- # size=4096 00:22:30.285 05:19:49 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:30.285 05:19:49 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:30.285 05:19:49 -- common/autotest_common.sh@877 -- # return 0 00:22:30.285 05:19:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:30.285 05:19:49 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:30.285 05:19:49 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:30.285 05:19:49 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:30.285 05:19:49 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:30.285 05:19:49 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:30.285 05:19:49 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:30.285 05:19:49 -- bdev/nbd_common.sh@51 -- # local i 00:22:30.285 05:19:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:30.285 05:19:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:30.543 05:19:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:30.543 05:19:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:30.543 05:19:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:30.543 05:19:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:30.543 05:19:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:30.543 05:19:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:30.543 05:19:49 -- bdev/nbd_common.sh@41 -- # break 00:22:30.543 05:19:49 -- bdev/nbd_common.sh@45 -- # return 0 00:22:30.543 05:19:49 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:30.543 05:19:49 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:30.543 05:19:49 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:30.543 05:19:49 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:30.543 05:19:49 -- bdev/nbd_common.sh@51 -- # local i 00:22:30.543 05:19:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:30.543 05:19:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:30.801 05:19:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:30.801 05:19:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:30.801 05:19:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:30.801 05:19:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:30.801 05:19:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:30.801 05:19:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:30.801 05:19:49 -- bdev/nbd_common.sh@41 -- # break 00:22:30.801 05:19:49 -- bdev/nbd_common.sh@45 -- # return 0 00:22:30.801 05:19:49 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:22:30.801 05:19:49 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:30.801 05:19:49 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:22:30.801 05:19:49 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:31.059 05:19:49 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:31.059 [2024-07-26 05:19:50.087628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:22:31.059 [2024-07-26 05:19:50.087708] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:31.059 [2024-07-26 05:19:50.087740] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b780 00:22:31.059 [2024-07-26 05:19:50.087754] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:31.059 [2024-07-26 05:19:50.089915] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:31.059 [2024-07-26 05:19:50.089960] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:31.059 [2024-07-26 05:19:50.090067] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:31.060 [2024-07-26 05:19:50.090130] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:31.060 BaseBdev1 00:22:31.060 05:19:50 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:31.060 05:19:50 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:22:31.060 05:19:50 -- bdev/bdev_raid.sh@696 -- # continue 00:22:31.060 05:19:50 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:31.060 05:19:50 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:22:31.060 05:19:50 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:22:31.317 05:19:50 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:31.575 [2024-07-26 05:19:50.531774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:31.575 [2024-07-26 05:19:50.531848] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:31.575 [2024-07-26 05:19:50.531880] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c080 00:22:31.575 [2024-07-26 05:19:50.531894] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:31.575 [2024-07-26 05:19:50.532402] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:31.575 [2024-07-26 05:19:50.532438] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:31.575 [2024-07-26 05:19:50.532547] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:22:31.575 [2024-07-26 05:19:50.532568] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:22:31.575 [2024-07-26 05:19:50.532583] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:31.575 [2024-07-26 05:19:50.532621] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000bd80 name raid_bdev1, state configuring 00:22:31.575 [2024-07-26 05:19:50.532697] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:31.575 BaseBdev3 00:22:31.575 05:19:50 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:31.575 05:19:50 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:22:31.575 05:19:50 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:22:31.833 05:19:50 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev4_malloc -p BaseBdev4 00:22:32.091 [2024-07-26 05:19:50.951895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:32.091 [2024-07-26 05:19:50.951958] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:32.092 [2024-07-26 05:19:50.951991] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c680 00:22:32.092 [2024-07-26 05:19:50.952048] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:32.092 [2024-07-26 05:19:50.952547] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:32.092 [2024-07-26 05:19:50.952576] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:32.092 [2024-07-26 05:19:50.952701] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:22:32.092 [2024-07-26 05:19:50.952728] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:32.092 BaseBdev4 00:22:32.092 05:19:50 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:32.350 05:19:51 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:32.350 [2024-07-26 05:19:51.380034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:32.350 [2024-07-26 05:19:51.380092] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:32.350 [2024-07-26 05:19:51.380123] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c980 00:22:32.350 [2024-07-26 05:19:51.380135] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:32.350 [2024-07-26 05:19:51.381171] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:32.350 [2024-07-26 05:19:51.381201] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:32.350 [2024-07-26 05:19:51.381300] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:22:32.350 [2024-07-26 05:19:51.381329] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:32.350 spare 00:22:32.350 05:19:51 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:32.350 05:19:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:32.350 05:19:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:32.350 05:19:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:32.350 05:19:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:32.350 05:19:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:32.350 05:19:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:32.350 05:19:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:32.350 05:19:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:32.350 05:19:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:32.350 05:19:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.350 05:19:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.608 [2024-07-26 05:19:51.481458] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000c380 00:22:32.608 
[2024-07-26 05:19:51.481653] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:32.608 [2024-07-26 05:19:51.481845] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000036870 00:22:32.609 [2024-07-26 05:19:51.482502] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000c380 00:22:32.609 [2024-07-26 05:19:51.482680] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000c380 00:22:32.609 [2024-07-26 05:19:51.483005] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:32.609 05:19:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:32.609 "name": "raid_bdev1", 00:22:32.609 "uuid": "f28b39b2-37e6-466a-b518-ff2b8a51a607", 00:22:32.609 "strip_size_kb": 0, 00:22:32.609 "state": "online", 00:22:32.609 "raid_level": "raid1", 00:22:32.609 "superblock": true, 00:22:32.609 "num_base_bdevs": 4, 00:22:32.609 "num_base_bdevs_discovered": 3, 00:22:32.609 "num_base_bdevs_operational": 3, 00:22:32.609 "base_bdevs_list": [ 00:22:32.609 { 00:22:32.609 "name": "spare", 00:22:32.609 "uuid": "864634a4-911d-592a-9076-96d1315f1490", 00:22:32.609 "is_configured": true, 00:22:32.609 "data_offset": 2048, 00:22:32.609 "data_size": 63488 00:22:32.609 }, 00:22:32.609 { 00:22:32.609 "name": null, 00:22:32.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.609 "is_configured": false, 00:22:32.609 "data_offset": 2048, 00:22:32.609 "data_size": 63488 00:22:32.609 }, 00:22:32.609 { 00:22:32.609 "name": "BaseBdev3", 00:22:32.609 "uuid": "67831fdc-61ed-590e-a564-4babfc4bc7b9", 00:22:32.609 "is_configured": true, 00:22:32.609 "data_offset": 2048, 00:22:32.609 "data_size": 63488 00:22:32.609 }, 00:22:32.609 { 00:22:32.609 "name": "BaseBdev4", 00:22:32.609 "uuid": "d72adaf9-5381-5af2-a2b8-ecb9c7b46483", 00:22:32.609 "is_configured": true, 00:22:32.609 "data_offset": 2048, 00:22:32.609 "data_size": 63488 00:22:32.609 } 00:22:32.609 ] 00:22:32.609 }' 00:22:32.609 05:19:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:32.609 05:19:51 -- common/autotest_common.sh@10 -- # set +x 00:22:32.867 05:19:51 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:32.867 05:19:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:32.867 05:19:51 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:32.867 05:19:51 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:32.867 05:19:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:32.867 05:19:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.867 05:19:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.126 05:19:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:33.126 "name": "raid_bdev1", 00:22:33.126 "uuid": "f28b39b2-37e6-466a-b518-ff2b8a51a607", 00:22:33.126 "strip_size_kb": 0, 00:22:33.126 "state": "online", 00:22:33.126 "raid_level": "raid1", 00:22:33.126 "superblock": true, 00:22:33.126 "num_base_bdevs": 4, 00:22:33.126 "num_base_bdevs_discovered": 3, 00:22:33.126 "num_base_bdevs_operational": 3, 00:22:33.126 "base_bdevs_list": [ 00:22:33.126 { 00:22:33.126 "name": "spare", 00:22:33.126 "uuid": "864634a4-911d-592a-9076-96d1315f1490", 00:22:33.126 "is_configured": true, 00:22:33.126 "data_offset": 2048, 00:22:33.126 "data_size": 63488 00:22:33.126 }, 00:22:33.126 { 00:22:33.126 "name": null, 00:22:33.126 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:33.126 "is_configured": false, 00:22:33.126 "data_offset": 2048, 00:22:33.126 "data_size": 63488 00:22:33.126 }, 00:22:33.126 { 00:22:33.126 "name": "BaseBdev3", 00:22:33.126 "uuid": "67831fdc-61ed-590e-a564-4babfc4bc7b9", 00:22:33.126 "is_configured": true, 00:22:33.126 "data_offset": 2048, 00:22:33.126 "data_size": 63488 00:22:33.126 }, 00:22:33.126 { 00:22:33.126 "name": "BaseBdev4", 00:22:33.126 "uuid": "d72adaf9-5381-5af2-a2b8-ecb9c7b46483", 00:22:33.126 "is_configured": true, 00:22:33.126 "data_offset": 2048, 00:22:33.126 "data_size": 63488 00:22:33.126 } 00:22:33.126 ] 00:22:33.126 }' 00:22:33.126 05:19:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:33.126 05:19:52 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:33.126 05:19:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:33.126 05:19:52 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:33.126 05:19:52 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.126 05:19:52 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:33.385 05:19:52 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:22:33.385 05:19:52 -- bdev/bdev_raid.sh@709 -- # killprocess 81698 00:22:33.385 05:19:52 -- common/autotest_common.sh@926 -- # '[' -z 81698 ']' 00:22:33.385 05:19:52 -- common/autotest_common.sh@930 -- # kill -0 81698 00:22:33.385 05:19:52 -- common/autotest_common.sh@931 -- # uname 00:22:33.385 05:19:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:33.385 05:19:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81698 00:22:33.385 killing process with pid 81698 00:22:33.385 Received shutdown signal, test time was about 15.290053 seconds 00:22:33.385 00:22:33.385 Latency(us) 00:22:33.385 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.385 =================================================================================================================== 00:22:33.385 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:33.385 05:19:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:33.385 05:19:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:33.385 05:19:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81698' 00:22:33.385 05:19:52 -- common/autotest_common.sh@945 -- # kill 81698 00:22:33.385 05:19:52 -- common/autotest_common.sh@950 -- # wait 81698 00:22:33.385 [2024-07-26 05:19:52.356217] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:33.385 [2024-07-26 05:19:52.356308] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:33.385 [2024-07-26 05:19:52.356464] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:33.385 [2024-07-26 05:19:52.356486] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000c380 name raid_bdev1, state offline 00:22:33.643 [2024-07-26 05:19:52.628394] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:34.579 ************************************ 00:22:34.579 END TEST raid_rebuild_test_sb_io 00:22:34.579 ************************************ 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:34.579 00:22:34.579 real 0m20.874s 00:22:34.579 user 0m31.159s 00:22:34.579 sys 0m2.740s 00:22:34.579 05:19:53 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:22:34.579 05:19:53 -- common/autotest_common.sh@10 -- # set +x 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@742 -- # '[' y == y ']' 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:22:34.579 05:19:53 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:22:34.579 05:19:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:34.579 05:19:53 -- common/autotest_common.sh@10 -- # set +x 00:22:34.579 ************************************ 00:22:34.579 START TEST raid5f_state_function_test 00:22:34.579 ************************************ 00:22:34.579 05:19:53 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 3 false 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@226 -- # raid_pid=82258 00:22:34.579 Process raid pid: 82258 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 82258' 00:22:34.579 05:19:53 -- bdev/bdev_raid.sh@228 -- # waitforlisten 82258 /var/tmp/spdk-raid.sock 00:22:34.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
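Before the raid5f trace continues, the shape of the state-function flow it is about to exercise can be summarized in a few RPC calls. A minimal sketch, assuming the rpc.py client and socket used throughout this log; the strip size, raid level, bdev names and malloc sizes are the ones that appear in the trace below, and the snippet is an illustration rather than the test itself:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# With no member bdevs present yet, the raid is created in the "configuring" state
# with num_base_bdevs_discovered == 0.
"$rpc" -s "$sock" bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
"$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'
# The test then deletes and re-creates the raid after each member malloc bdev appears,
# checking that num_base_bdevs_discovered grows accordingly.
"$rpc" -s "$sock" bdev_raid_delete Existed_Raid
"$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1
"$rpc" -s "$sock" bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid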
00:22:34.579 05:19:53 -- common/autotest_common.sh@819 -- # '[' -z 82258 ']' 00:22:34.579 05:19:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:34.579 05:19:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:34.579 05:19:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:34.579 05:19:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:34.579 05:19:53 -- common/autotest_common.sh@10 -- # set +x 00:22:34.837 [2024-07-26 05:19:53.716328] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:34.837 [2024-07-26 05:19:53.716521] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.837 [2024-07-26 05:19:53.883078] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.096 [2024-07-26 05:19:54.036167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.096 [2024-07-26 05:19:54.180205] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:35.663 05:19:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:35.663 05:19:54 -- common/autotest_common.sh@852 -- # return 0 00:22:35.663 05:19:54 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:35.663 [2024-07-26 05:19:54.742198] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:35.663 [2024-07-26 05:19:54.742279] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:35.663 [2024-07-26 05:19:54.742295] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:35.663 [2024-07-26 05:19:54.742310] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:35.663 [2024-07-26 05:19:54.742318] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:35.663 [2024-07-26 05:19:54.742331] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:35.663 05:19:54 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:35.663 05:19:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:35.663 05:19:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:35.663 05:19:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:35.663 05:19:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:35.663 05:19:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:35.663 05:19:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:35.663 05:19:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:35.663 05:19:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:35.663 05:19:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:35.663 05:19:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:35.663 05:19:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.921 05:19:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:35.921 
"name": "Existed_Raid", 00:22:35.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:35.921 "strip_size_kb": 64, 00:22:35.921 "state": "configuring", 00:22:35.921 "raid_level": "raid5f", 00:22:35.921 "superblock": false, 00:22:35.921 "num_base_bdevs": 3, 00:22:35.921 "num_base_bdevs_discovered": 0, 00:22:35.921 "num_base_bdevs_operational": 3, 00:22:35.921 "base_bdevs_list": [ 00:22:35.921 { 00:22:35.921 "name": "BaseBdev1", 00:22:35.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:35.922 "is_configured": false, 00:22:35.922 "data_offset": 0, 00:22:35.922 "data_size": 0 00:22:35.922 }, 00:22:35.922 { 00:22:35.922 "name": "BaseBdev2", 00:22:35.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:35.922 "is_configured": false, 00:22:35.922 "data_offset": 0, 00:22:35.922 "data_size": 0 00:22:35.922 }, 00:22:35.922 { 00:22:35.922 "name": "BaseBdev3", 00:22:35.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:35.922 "is_configured": false, 00:22:35.922 "data_offset": 0, 00:22:35.922 "data_size": 0 00:22:35.922 } 00:22:35.922 ] 00:22:35.922 }' 00:22:35.922 05:19:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:35.922 05:19:54 -- common/autotest_common.sh@10 -- # set +x 00:22:36.180 05:19:55 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:36.439 [2024-07-26 05:19:55.482315] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:36.439 [2024-07-26 05:19:55.482500] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:22:36.439 05:19:55 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:36.705 [2024-07-26 05:19:55.678384] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:36.705 [2024-07-26 05:19:55.678583] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:36.705 [2024-07-26 05:19:55.678650] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:36.705 [2024-07-26 05:19:55.678682] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:36.705 [2024-07-26 05:19:55.678692] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:36.705 [2024-07-26 05:19:55.678705] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:36.705 05:19:55 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:36.964 BaseBdev1 00:22:36.964 [2024-07-26 05:19:55.891102] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:36.964 05:19:55 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:22:36.964 05:19:55 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:22:36.964 05:19:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:36.964 05:19:55 -- common/autotest_common.sh@889 -- # local i 00:22:36.964 05:19:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:36.964 05:19:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:36.964 05:19:55 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:22:37.223 05:19:56 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:37.223 [ 00:22:37.223 { 00:22:37.223 "name": "BaseBdev1", 00:22:37.223 "aliases": [ 00:22:37.223 "98197dde-b68b-418e-b896-5378ce9ec956" 00:22:37.223 ], 00:22:37.223 "product_name": "Malloc disk", 00:22:37.223 "block_size": 512, 00:22:37.223 "num_blocks": 65536, 00:22:37.223 "uuid": "98197dde-b68b-418e-b896-5378ce9ec956", 00:22:37.223 "assigned_rate_limits": { 00:22:37.223 "rw_ios_per_sec": 0, 00:22:37.223 "rw_mbytes_per_sec": 0, 00:22:37.223 "r_mbytes_per_sec": 0, 00:22:37.223 "w_mbytes_per_sec": 0 00:22:37.223 }, 00:22:37.223 "claimed": true, 00:22:37.223 "claim_type": "exclusive_write", 00:22:37.223 "zoned": false, 00:22:37.223 "supported_io_types": { 00:22:37.223 "read": true, 00:22:37.223 "write": true, 00:22:37.223 "unmap": true, 00:22:37.223 "write_zeroes": true, 00:22:37.223 "flush": true, 00:22:37.223 "reset": true, 00:22:37.223 "compare": false, 00:22:37.223 "compare_and_write": false, 00:22:37.223 "abort": true, 00:22:37.223 "nvme_admin": false, 00:22:37.223 "nvme_io": false 00:22:37.223 }, 00:22:37.223 "memory_domains": [ 00:22:37.223 { 00:22:37.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:37.223 "dma_device_type": 2 00:22:37.223 } 00:22:37.223 ], 00:22:37.223 "driver_specific": {} 00:22:37.223 } 00:22:37.223 ] 00:22:37.223 05:19:56 -- common/autotest_common.sh@895 -- # return 0 00:22:37.223 05:19:56 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:37.223 05:19:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:37.223 05:19:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:37.223 05:19:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:37.223 05:19:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:37.223 05:19:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:37.223 05:19:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:37.223 05:19:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:37.223 05:19:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:37.223 05:19:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:37.223 05:19:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:37.223 05:19:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.482 05:19:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:37.482 "name": "Existed_Raid", 00:22:37.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.482 "strip_size_kb": 64, 00:22:37.482 "state": "configuring", 00:22:37.482 "raid_level": "raid5f", 00:22:37.482 "superblock": false, 00:22:37.482 "num_base_bdevs": 3, 00:22:37.482 "num_base_bdevs_discovered": 1, 00:22:37.482 "num_base_bdevs_operational": 3, 00:22:37.482 "base_bdevs_list": [ 00:22:37.482 { 00:22:37.482 "name": "BaseBdev1", 00:22:37.482 "uuid": "98197dde-b68b-418e-b896-5378ce9ec956", 00:22:37.482 "is_configured": true, 00:22:37.482 "data_offset": 0, 00:22:37.482 "data_size": 65536 00:22:37.482 }, 00:22:37.482 { 00:22:37.482 "name": "BaseBdev2", 00:22:37.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.482 "is_configured": false, 00:22:37.482 "data_offset": 0, 00:22:37.482 "data_size": 0 00:22:37.482 }, 00:22:37.482 { 00:22:37.482 "name": "BaseBdev3", 00:22:37.482 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:37.482 "is_configured": false, 00:22:37.482 "data_offset": 0, 00:22:37.482 "data_size": 0 00:22:37.482 } 00:22:37.482 ] 00:22:37.482 }' 00:22:37.482 05:19:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:37.482 05:19:56 -- common/autotest_common.sh@10 -- # set +x 00:22:37.741 05:19:56 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:38.000 [2024-07-26 05:19:56.987419] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:38.000 [2024-07-26 05:19:56.987468] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:22:38.000 05:19:57 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:22:38.000 05:19:57 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:38.259 [2024-07-26 05:19:57.171504] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:38.259 [2024-07-26 05:19:57.173308] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:38.259 [2024-07-26 05:19:57.173356] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:38.259 [2024-07-26 05:19:57.173369] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:38.259 [2024-07-26 05:19:57.173382] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:38.259 05:19:57 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:22:38.259 05:19:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:38.259 05:19:57 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:38.259 05:19:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:38.259 05:19:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:38.259 05:19:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:38.259 05:19:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:38.259 05:19:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:38.259 05:19:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:38.259 05:19:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:38.259 05:19:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:38.259 05:19:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:38.259 05:19:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.259 05:19:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:38.518 05:19:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:38.518 "name": "Existed_Raid", 00:22:38.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.518 "strip_size_kb": 64, 00:22:38.518 "state": "configuring", 00:22:38.518 "raid_level": "raid5f", 00:22:38.518 "superblock": false, 00:22:38.518 "num_base_bdevs": 3, 00:22:38.518 "num_base_bdevs_discovered": 1, 00:22:38.518 "num_base_bdevs_operational": 3, 00:22:38.518 "base_bdevs_list": [ 00:22:38.518 { 00:22:38.518 "name": "BaseBdev1", 00:22:38.518 "uuid": "98197dde-b68b-418e-b896-5378ce9ec956", 00:22:38.518 "is_configured": true, 00:22:38.518 "data_offset": 0, 00:22:38.518 "data_size": 65536 
00:22:38.518 }, 00:22:38.518 { 00:22:38.518 "name": "BaseBdev2", 00:22:38.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.518 "is_configured": false, 00:22:38.518 "data_offset": 0, 00:22:38.518 "data_size": 0 00:22:38.518 }, 00:22:38.518 { 00:22:38.518 "name": "BaseBdev3", 00:22:38.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.518 "is_configured": false, 00:22:38.518 "data_offset": 0, 00:22:38.518 "data_size": 0 00:22:38.518 } 00:22:38.518 ] 00:22:38.518 }' 00:22:38.518 05:19:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:38.518 05:19:57 -- common/autotest_common.sh@10 -- # set +x 00:22:38.778 05:19:57 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:38.778 [2024-07-26 05:19:57.887466] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:39.037 BaseBdev2 00:22:39.037 05:19:57 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:22:39.037 05:19:57 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:22:39.037 05:19:57 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:39.037 05:19:57 -- common/autotest_common.sh@889 -- # local i 00:22:39.037 05:19:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:39.037 05:19:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:39.037 05:19:57 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:39.295 05:19:58 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:39.295 [ 00:22:39.296 { 00:22:39.296 "name": "BaseBdev2", 00:22:39.296 "aliases": [ 00:22:39.296 "8118c3af-489c-4f1a-88db-0d58d77026a6" 00:22:39.296 ], 00:22:39.296 "product_name": "Malloc disk", 00:22:39.296 "block_size": 512, 00:22:39.296 "num_blocks": 65536, 00:22:39.296 "uuid": "8118c3af-489c-4f1a-88db-0d58d77026a6", 00:22:39.296 "assigned_rate_limits": { 00:22:39.296 "rw_ios_per_sec": 0, 00:22:39.296 "rw_mbytes_per_sec": 0, 00:22:39.296 "r_mbytes_per_sec": 0, 00:22:39.296 "w_mbytes_per_sec": 0 00:22:39.296 }, 00:22:39.296 "claimed": true, 00:22:39.296 "claim_type": "exclusive_write", 00:22:39.296 "zoned": false, 00:22:39.296 "supported_io_types": { 00:22:39.296 "read": true, 00:22:39.296 "write": true, 00:22:39.296 "unmap": true, 00:22:39.296 "write_zeroes": true, 00:22:39.296 "flush": true, 00:22:39.296 "reset": true, 00:22:39.296 "compare": false, 00:22:39.296 "compare_and_write": false, 00:22:39.296 "abort": true, 00:22:39.296 "nvme_admin": false, 00:22:39.296 "nvme_io": false 00:22:39.296 }, 00:22:39.296 "memory_domains": [ 00:22:39.296 { 00:22:39.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:39.296 "dma_device_type": 2 00:22:39.296 } 00:22:39.296 ], 00:22:39.296 "driver_specific": {} 00:22:39.296 } 00:22:39.296 ] 00:22:39.296 05:19:58 -- common/autotest_common.sh@895 -- # return 0 00:22:39.296 05:19:58 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:39.296 05:19:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:39.296 05:19:58 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:39.296 05:19:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:39.296 05:19:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:39.296 05:19:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:39.296 05:19:58 
-- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:39.296 05:19:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:39.296 05:19:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:39.296 05:19:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:39.296 05:19:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:39.296 05:19:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:39.296 05:19:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.296 05:19:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:39.555 05:19:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:39.555 "name": "Existed_Raid", 00:22:39.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.555 "strip_size_kb": 64, 00:22:39.555 "state": "configuring", 00:22:39.555 "raid_level": "raid5f", 00:22:39.555 "superblock": false, 00:22:39.555 "num_base_bdevs": 3, 00:22:39.555 "num_base_bdevs_discovered": 2, 00:22:39.555 "num_base_bdevs_operational": 3, 00:22:39.555 "base_bdevs_list": [ 00:22:39.555 { 00:22:39.555 "name": "BaseBdev1", 00:22:39.555 "uuid": "98197dde-b68b-418e-b896-5378ce9ec956", 00:22:39.555 "is_configured": true, 00:22:39.555 "data_offset": 0, 00:22:39.555 "data_size": 65536 00:22:39.555 }, 00:22:39.555 { 00:22:39.555 "name": "BaseBdev2", 00:22:39.555 "uuid": "8118c3af-489c-4f1a-88db-0d58d77026a6", 00:22:39.555 "is_configured": true, 00:22:39.555 "data_offset": 0, 00:22:39.555 "data_size": 65536 00:22:39.555 }, 00:22:39.555 { 00:22:39.555 "name": "BaseBdev3", 00:22:39.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.555 "is_configured": false, 00:22:39.555 "data_offset": 0, 00:22:39.555 "data_size": 0 00:22:39.555 } 00:22:39.555 ] 00:22:39.555 }' 00:22:39.555 05:19:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:39.555 05:19:58 -- common/autotest_common.sh@10 -- # set +x 00:22:39.814 05:19:58 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:40.073 [2024-07-26 05:19:58.974915] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:40.073 [2024-07-26 05:19:58.975299] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:22:40.073 [2024-07-26 05:19:58.975330] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:22:40.073 [2024-07-26 05:19:58.975445] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:22:40.073 [2024-07-26 05:19:58.979886] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:22:40.073 [2024-07-26 05:19:58.979909] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:22:40.073 BaseBdev3 00:22:40.073 [2024-07-26 05:19:58.980231] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:40.073 05:19:58 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:22:40.073 05:19:58 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:22:40.073 05:19:58 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:40.073 05:19:58 -- common/autotest_common.sh@889 -- # local i 00:22:40.073 05:19:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:40.073 05:19:58 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:40.073 05:19:58 -- 
common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:40.073 05:19:59 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:40.333 [ 00:22:40.333 { 00:22:40.333 "name": "BaseBdev3", 00:22:40.333 "aliases": [ 00:22:40.333 "b50adcf0-8a60-4ab6-89e5-a02ac883c7c0" 00:22:40.333 ], 00:22:40.333 "product_name": "Malloc disk", 00:22:40.333 "block_size": 512, 00:22:40.333 "num_blocks": 65536, 00:22:40.333 "uuid": "b50adcf0-8a60-4ab6-89e5-a02ac883c7c0", 00:22:40.333 "assigned_rate_limits": { 00:22:40.333 "rw_ios_per_sec": 0, 00:22:40.333 "rw_mbytes_per_sec": 0, 00:22:40.333 "r_mbytes_per_sec": 0, 00:22:40.333 "w_mbytes_per_sec": 0 00:22:40.333 }, 00:22:40.333 "claimed": true, 00:22:40.333 "claim_type": "exclusive_write", 00:22:40.333 "zoned": false, 00:22:40.333 "supported_io_types": { 00:22:40.333 "read": true, 00:22:40.333 "write": true, 00:22:40.333 "unmap": true, 00:22:40.333 "write_zeroes": true, 00:22:40.333 "flush": true, 00:22:40.333 "reset": true, 00:22:40.333 "compare": false, 00:22:40.333 "compare_and_write": false, 00:22:40.333 "abort": true, 00:22:40.333 "nvme_admin": false, 00:22:40.333 "nvme_io": false 00:22:40.333 }, 00:22:40.333 "memory_domains": [ 00:22:40.333 { 00:22:40.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:40.333 "dma_device_type": 2 00:22:40.333 } 00:22:40.333 ], 00:22:40.333 "driver_specific": {} 00:22:40.334 } 00:22:40.334 ] 00:22:40.334 05:19:59 -- common/autotest_common.sh@895 -- # return 0 00:22:40.334 05:19:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:40.334 05:19:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:40.334 05:19:59 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:22:40.334 05:19:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:40.334 05:19:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:40.334 05:19:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:40.334 05:19:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:40.334 05:19:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:40.334 05:19:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:40.334 05:19:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:40.334 05:19:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:40.334 05:19:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:40.334 05:19:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.334 05:19:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:40.593 05:19:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:40.593 "name": "Existed_Raid", 00:22:40.593 "uuid": "b1987ce4-d5ed-4991-b47a-b1214e9e55c0", 00:22:40.593 "strip_size_kb": 64, 00:22:40.593 "state": "online", 00:22:40.593 "raid_level": "raid5f", 00:22:40.593 "superblock": false, 00:22:40.593 "num_base_bdevs": 3, 00:22:40.593 "num_base_bdevs_discovered": 3, 00:22:40.593 "num_base_bdevs_operational": 3, 00:22:40.593 "base_bdevs_list": [ 00:22:40.593 { 00:22:40.593 "name": "BaseBdev1", 00:22:40.593 "uuid": "98197dde-b68b-418e-b896-5378ce9ec956", 00:22:40.593 "is_configured": true, 00:22:40.593 "data_offset": 0, 00:22:40.593 "data_size": 65536 00:22:40.593 }, 00:22:40.593 { 00:22:40.593 "name": "BaseBdev2", 00:22:40.593 
"uuid": "8118c3af-489c-4f1a-88db-0d58d77026a6", 00:22:40.593 "is_configured": true, 00:22:40.593 "data_offset": 0, 00:22:40.593 "data_size": 65536 00:22:40.593 }, 00:22:40.593 { 00:22:40.593 "name": "BaseBdev3", 00:22:40.593 "uuid": "b50adcf0-8a60-4ab6-89e5-a02ac883c7c0", 00:22:40.593 "is_configured": true, 00:22:40.593 "data_offset": 0, 00:22:40.593 "data_size": 65536 00:22:40.593 } 00:22:40.593 ] 00:22:40.593 }' 00:22:40.593 05:19:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:40.593 05:19:59 -- common/autotest_common.sh@10 -- # set +x 00:22:40.852 05:19:59 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:41.111 [2024-07-26 05:19:59.977131] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:41.111 05:20:00 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:22:41.111 05:20:00 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:22:41.111 05:20:00 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:22:41.111 05:20:00 -- bdev/bdev_raid.sh@196 -- # return 0 00:22:41.111 05:20:00 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:22:41.111 05:20:00 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:22:41.111 05:20:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:41.111 05:20:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:41.111 05:20:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:41.111 05:20:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:41.111 05:20:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:41.111 05:20:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:41.111 05:20:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:41.111 05:20:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:41.111 05:20:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:41.111 05:20:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:41.111 05:20:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.370 05:20:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:41.370 "name": "Existed_Raid", 00:22:41.370 "uuid": "b1987ce4-d5ed-4991-b47a-b1214e9e55c0", 00:22:41.370 "strip_size_kb": 64, 00:22:41.370 "state": "online", 00:22:41.370 "raid_level": "raid5f", 00:22:41.370 "superblock": false, 00:22:41.370 "num_base_bdevs": 3, 00:22:41.370 "num_base_bdevs_discovered": 2, 00:22:41.370 "num_base_bdevs_operational": 2, 00:22:41.370 "base_bdevs_list": [ 00:22:41.370 { 00:22:41.370 "name": null, 00:22:41.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.370 "is_configured": false, 00:22:41.370 "data_offset": 0, 00:22:41.370 "data_size": 65536 00:22:41.370 }, 00:22:41.370 { 00:22:41.370 "name": "BaseBdev2", 00:22:41.370 "uuid": "8118c3af-489c-4f1a-88db-0d58d77026a6", 00:22:41.370 "is_configured": true, 00:22:41.370 "data_offset": 0, 00:22:41.370 "data_size": 65536 00:22:41.370 }, 00:22:41.370 { 00:22:41.371 "name": "BaseBdev3", 00:22:41.371 "uuid": "b50adcf0-8a60-4ab6-89e5-a02ac883c7c0", 00:22:41.371 "is_configured": true, 00:22:41.371 "data_offset": 0, 00:22:41.371 "data_size": 65536 00:22:41.371 } 00:22:41.371 ] 00:22:41.371 }' 00:22:41.371 05:20:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:41.371 05:20:00 -- common/autotest_common.sh@10 -- # set +x 00:22:41.629 05:20:00 -- bdev/bdev_raid.sh@273 -- # (( i = 
1 )) 00:22:41.629 05:20:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:41.629 05:20:00 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.629 05:20:00 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:41.888 05:20:00 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:41.888 05:20:00 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:41.888 05:20:00 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:41.888 [2024-07-26 05:20:00.914756] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:41.888 [2024-07-26 05:20:00.914944] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:41.888 [2024-07-26 05:20:00.915085] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:41.888 05:20:00 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:41.888 05:20:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:41.888 05:20:00 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:41.888 05:20:00 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.147 05:20:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:42.147 05:20:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:42.147 05:20:01 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:42.406 [2024-07-26 05:20:01.457101] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:42.406 [2024-07-26 05:20:01.457293] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:22:42.664 05:20:01 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:42.664 05:20:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:42.664 05:20:01 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:22:42.664 05:20:01 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.664 05:20:01 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:22:42.664 05:20:01 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:22:42.664 05:20:01 -- bdev/bdev_raid.sh@287 -- # killprocess 82258 00:22:42.664 05:20:01 -- common/autotest_common.sh@926 -- # '[' -z 82258 ']' 00:22:42.664 05:20:01 -- common/autotest_common.sh@930 -- # kill -0 82258 00:22:42.664 05:20:01 -- common/autotest_common.sh@931 -- # uname 00:22:42.664 05:20:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:42.664 05:20:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82258 00:22:42.664 killing process with pid 82258 00:22:42.664 05:20:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:42.664 05:20:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:42.664 05:20:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82258' 00:22:42.664 05:20:01 -- common/autotest_common.sh@945 -- # kill 82258 00:22:42.664 05:20:01 -- common/autotest_common.sh@950 -- # wait 82258 00:22:42.664 [2024-07-26 05:20:01.755538] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:42.664 [2024-07-26 05:20:01.755633] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:43.600 
05:20:02 -- bdev/bdev_raid.sh@289 -- # return 0 00:22:43.600 00:22:43.600 real 0m9.042s 00:22:43.600 user 0m14.847s 00:22:43.600 sys 0m1.390s 00:22:43.600 05:20:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:43.600 05:20:02 -- common/autotest_common.sh@10 -- # set +x 00:22:43.600 ************************************ 00:22:43.600 END TEST raid5f_state_function_test 00:22:43.600 ************************************ 00:22:43.859 05:20:02 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:22:43.859 05:20:02 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:22:43.859 05:20:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:43.859 05:20:02 -- common/autotest_common.sh@10 -- # set +x 00:22:43.859 ************************************ 00:22:43.859 START TEST raid5f_state_function_test_sb 00:22:43.859 ************************************ 00:22:43.859 05:20:02 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 3 true 00:22:43.859 05:20:02 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:22:43.859 05:20:02 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:22:43.859 05:20:02 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:22:43.859 05:20:02 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:22:43.859 05:20:02 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:22:43.859 05:20:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:43.859 05:20:02 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:22:43.859 05:20:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:43.859 05:20:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:43.859 05:20:02 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:22:43.859 05:20:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:43.859 05:20:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:43.859 05:20:02 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:22:43.859 05:20:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:43.859 05:20:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:43.859 Process raid pid: 82587 00:22:43.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
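Every verify_raid_bdev_state call traced in the run that just finished reduces to the same two steps: dump all raid bdevs over RPC and filter the array under test with jq. A condensed standalone sketch of that check, assuming a target already listening on /var/tmp/spdk-raid.sock and the rpc.py path used throughout this log, would be roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # fetch every raid bdev record and keep only the array under test
    tmp=$($rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
          | jq -r '.[] | select(.name == "Existed_Raid")')
    # compare the reported state against the expectation (configuring / online / offline)
    [[ $(jq -r '.state' <<< "$tmp") == online ]] || exit 1

The real helper also compares raid_level, strip_size_kb and the base-bdev counts from the same JSON blob, as the dumps above show.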
00:22:43.859 05:20:02 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:43.859 05:20:02 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:22:43.859 05:20:02 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:22:43.859 05:20:02 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:22:43.859 05:20:02 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:22:43.859 05:20:02 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:22:43.859 05:20:02 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:22:43.859 05:20:02 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:22:43.859 05:20:02 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:22:43.859 05:20:02 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:22:43.859 05:20:02 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:22:43.859 05:20:02 -- bdev/bdev_raid.sh@226 -- # raid_pid=82587 00:22:43.859 05:20:02 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 82587' 00:22:43.859 05:20:02 -- bdev/bdev_raid.sh@228 -- # waitforlisten 82587 /var/tmp/spdk-raid.sock 00:22:43.859 05:20:02 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:43.859 05:20:02 -- common/autotest_common.sh@819 -- # '[' -z 82587 ']' 00:22:43.859 05:20:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:43.859 05:20:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:43.859 05:20:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:43.859 05:20:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:43.859 05:20:02 -- common/autotest_common.sh@10 -- # set +x 00:22:43.859 [2024-07-26 05:20:02.804229] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
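The _sb variant drives the same state machine through a fresh bdev_svc target (pid 82587); waitforlisten simply blocks until that target's RPC socket answers. A rough, simplified equivalent of the startup sequence, using only the binary and socket path shown in this log (the real helper in common/autotest_common.sh does more bookkeeping), is:

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # poll the socket until the target accepts RPCs (approximation of waitforlisten)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs >/dev/null 2>&1; do
        sleep 0.5
    done

From there the run repeats the earlier sequence with superblock_create_arg=-s appended to bdev_raid_create, which is why each 65536-block malloc bdev later reports data_offset 2048 and data_size 63488 instead of 0 and 65536: the leading blocks are reserved for the on-disk superblock metadata.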
00:22:43.859 [2024-07-26 05:20:02.804386] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.118 [2024-07-26 05:20:02.977292] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.118 [2024-07-26 05:20:03.125200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.375 [2024-07-26 05:20:03.269773] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:44.633 05:20:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:44.633 05:20:03 -- common/autotest_common.sh@852 -- # return 0 00:22:44.633 05:20:03 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:44.890 [2024-07-26 05:20:03.902351] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:44.890 [2024-07-26 05:20:03.902449] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:44.890 [2024-07-26 05:20:03.902479] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:44.890 [2024-07-26 05:20:03.902495] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:44.890 [2024-07-26 05:20:03.902503] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:44.890 [2024-07-26 05:20:03.902515] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:44.890 05:20:03 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:44.890 05:20:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:44.890 05:20:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:44.890 05:20:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:44.890 05:20:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:44.890 05:20:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:44.890 05:20:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:44.891 05:20:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:44.891 05:20:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:44.891 05:20:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:44.891 05:20:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.891 05:20:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:45.149 05:20:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:45.149 "name": "Existed_Raid", 00:22:45.149 "uuid": "ad51bf40-a616-4c2e-acf0-64f7985f397c", 00:22:45.149 "strip_size_kb": 64, 00:22:45.149 "state": "configuring", 00:22:45.149 "raid_level": "raid5f", 00:22:45.149 "superblock": true, 00:22:45.149 "num_base_bdevs": 3, 00:22:45.149 "num_base_bdevs_discovered": 0, 00:22:45.149 "num_base_bdevs_operational": 3, 00:22:45.149 "base_bdevs_list": [ 00:22:45.149 { 00:22:45.149 "name": "BaseBdev1", 00:22:45.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.149 "is_configured": false, 00:22:45.149 "data_offset": 0, 00:22:45.149 "data_size": 0 00:22:45.149 }, 00:22:45.149 { 00:22:45.149 "name": "BaseBdev2", 00:22:45.149 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:45.149 "is_configured": false, 00:22:45.149 "data_offset": 0, 00:22:45.149 "data_size": 0 00:22:45.149 }, 00:22:45.149 { 00:22:45.149 "name": "BaseBdev3", 00:22:45.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.149 "is_configured": false, 00:22:45.149 "data_offset": 0, 00:22:45.149 "data_size": 0 00:22:45.149 } 00:22:45.149 ] 00:22:45.149 }' 00:22:45.149 05:20:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:45.149 05:20:04 -- common/autotest_common.sh@10 -- # set +x 00:22:45.407 05:20:04 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:45.666 [2024-07-26 05:20:04.582355] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:45.666 [2024-07-26 05:20:04.582431] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:22:45.666 05:20:04 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:45.925 [2024-07-26 05:20:04.830481] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:45.925 [2024-07-26 05:20:04.830566] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:45.925 [2024-07-26 05:20:04.830580] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:45.925 [2024-07-26 05:20:04.830597] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:45.925 [2024-07-26 05:20:04.830605] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:45.925 [2024-07-26 05:20:04.830660] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:45.925 05:20:04 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:46.184 [2024-07-26 05:20:05.098888] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:46.184 BaseBdev1 00:22:46.184 05:20:05 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:22:46.184 05:20:05 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:22:46.184 05:20:05 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:46.184 05:20:05 -- common/autotest_common.sh@889 -- # local i 00:22:46.184 05:20:05 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:46.184 05:20:05 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:46.184 05:20:05 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:46.184 05:20:05 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:46.443 [ 00:22:46.443 { 00:22:46.443 "name": "BaseBdev1", 00:22:46.443 "aliases": [ 00:22:46.443 "05c9a65c-e939-41ab-a430-09642e2f50e7" 00:22:46.443 ], 00:22:46.443 "product_name": "Malloc disk", 00:22:46.443 "block_size": 512, 00:22:46.443 "num_blocks": 65536, 00:22:46.443 "uuid": "05c9a65c-e939-41ab-a430-09642e2f50e7", 00:22:46.443 "assigned_rate_limits": { 00:22:46.443 "rw_ios_per_sec": 0, 00:22:46.443 "rw_mbytes_per_sec": 0, 00:22:46.443 "r_mbytes_per_sec": 0, 00:22:46.443 
"w_mbytes_per_sec": 0 00:22:46.443 }, 00:22:46.443 "claimed": true, 00:22:46.443 "claim_type": "exclusive_write", 00:22:46.443 "zoned": false, 00:22:46.443 "supported_io_types": { 00:22:46.443 "read": true, 00:22:46.443 "write": true, 00:22:46.443 "unmap": true, 00:22:46.443 "write_zeroes": true, 00:22:46.443 "flush": true, 00:22:46.443 "reset": true, 00:22:46.443 "compare": false, 00:22:46.443 "compare_and_write": false, 00:22:46.443 "abort": true, 00:22:46.443 "nvme_admin": false, 00:22:46.443 "nvme_io": false 00:22:46.443 }, 00:22:46.443 "memory_domains": [ 00:22:46.443 { 00:22:46.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:46.443 "dma_device_type": 2 00:22:46.443 } 00:22:46.443 ], 00:22:46.443 "driver_specific": {} 00:22:46.443 } 00:22:46.443 ] 00:22:46.443 05:20:05 -- common/autotest_common.sh@895 -- # return 0 00:22:46.443 05:20:05 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:46.443 05:20:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:46.443 05:20:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:46.443 05:20:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:46.443 05:20:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:46.443 05:20:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:46.443 05:20:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:46.443 05:20:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:46.443 05:20:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:46.443 05:20:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:46.443 05:20:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:46.443 05:20:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.702 05:20:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:46.702 "name": "Existed_Raid", 00:22:46.702 "uuid": "924f756a-11c2-42ef-9fc0-fbc6959335bd", 00:22:46.702 "strip_size_kb": 64, 00:22:46.702 "state": "configuring", 00:22:46.702 "raid_level": "raid5f", 00:22:46.702 "superblock": true, 00:22:46.702 "num_base_bdevs": 3, 00:22:46.702 "num_base_bdevs_discovered": 1, 00:22:46.702 "num_base_bdevs_operational": 3, 00:22:46.702 "base_bdevs_list": [ 00:22:46.702 { 00:22:46.702 "name": "BaseBdev1", 00:22:46.702 "uuid": "05c9a65c-e939-41ab-a430-09642e2f50e7", 00:22:46.702 "is_configured": true, 00:22:46.702 "data_offset": 2048, 00:22:46.702 "data_size": 63488 00:22:46.702 }, 00:22:46.702 { 00:22:46.702 "name": "BaseBdev2", 00:22:46.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.702 "is_configured": false, 00:22:46.702 "data_offset": 0, 00:22:46.702 "data_size": 0 00:22:46.702 }, 00:22:46.702 { 00:22:46.702 "name": "BaseBdev3", 00:22:46.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.702 "is_configured": false, 00:22:46.702 "data_offset": 0, 00:22:46.702 "data_size": 0 00:22:46.702 } 00:22:46.702 ] 00:22:46.702 }' 00:22:46.702 05:20:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:46.702 05:20:05 -- common/autotest_common.sh@10 -- # set +x 00:22:46.974 05:20:05 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:47.234 [2024-07-26 05:20:06.223314] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:47.234 [2024-07-26 05:20:06.223365] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:22:47.234 05:20:06 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:22:47.234 05:20:06 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:47.493 05:20:06 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:47.751 BaseBdev1 00:22:47.751 05:20:06 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:22:47.751 05:20:06 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:22:47.751 05:20:06 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:47.751 05:20:06 -- common/autotest_common.sh@889 -- # local i 00:22:47.751 05:20:06 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:47.751 05:20:06 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:47.751 05:20:06 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:48.010 05:20:06 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:48.010 [ 00:22:48.010 { 00:22:48.010 "name": "BaseBdev1", 00:22:48.010 "aliases": [ 00:22:48.010 "bb102c03-2fc4-4a34-be2e-1b322ff09ccd" 00:22:48.010 ], 00:22:48.010 "product_name": "Malloc disk", 00:22:48.010 "block_size": 512, 00:22:48.010 "num_blocks": 65536, 00:22:48.010 "uuid": "bb102c03-2fc4-4a34-be2e-1b322ff09ccd", 00:22:48.010 "assigned_rate_limits": { 00:22:48.010 "rw_ios_per_sec": 0, 00:22:48.010 "rw_mbytes_per_sec": 0, 00:22:48.010 "r_mbytes_per_sec": 0, 00:22:48.010 "w_mbytes_per_sec": 0 00:22:48.010 }, 00:22:48.010 "claimed": false, 00:22:48.010 "zoned": false, 00:22:48.010 "supported_io_types": { 00:22:48.010 "read": true, 00:22:48.010 "write": true, 00:22:48.010 "unmap": true, 00:22:48.010 "write_zeroes": true, 00:22:48.010 "flush": true, 00:22:48.010 "reset": true, 00:22:48.010 "compare": false, 00:22:48.010 "compare_and_write": false, 00:22:48.010 "abort": true, 00:22:48.010 "nvme_admin": false, 00:22:48.010 "nvme_io": false 00:22:48.010 }, 00:22:48.010 "memory_domains": [ 00:22:48.010 { 00:22:48.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:48.010 "dma_device_type": 2 00:22:48.010 } 00:22:48.010 ], 00:22:48.010 "driver_specific": {} 00:22:48.010 } 00:22:48.010 ] 00:22:48.010 05:20:07 -- common/autotest_common.sh@895 -- # return 0 00:22:48.010 05:20:07 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:48.268 [2024-07-26 05:20:07.241955] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:48.268 [2024-07-26 05:20:07.243812] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:48.268 [2024-07-26 05:20:07.243863] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:48.268 [2024-07-26 05:20:07.243894] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:48.268 [2024-07-26 05:20:07.243908] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:48.268 05:20:07 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:22:48.268 05:20:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:48.268 
05:20:07 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:48.268 05:20:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:48.268 05:20:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:48.268 05:20:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:48.268 05:20:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:48.268 05:20:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:48.269 05:20:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:48.269 05:20:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:48.269 05:20:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:48.269 05:20:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:48.269 05:20:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:48.269 05:20:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.527 05:20:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:48.527 "name": "Existed_Raid", 00:22:48.527 "uuid": "be925ba4-28ab-450e-b29d-667df3d84ab5", 00:22:48.527 "strip_size_kb": 64, 00:22:48.527 "state": "configuring", 00:22:48.527 "raid_level": "raid5f", 00:22:48.527 "superblock": true, 00:22:48.527 "num_base_bdevs": 3, 00:22:48.527 "num_base_bdevs_discovered": 1, 00:22:48.527 "num_base_bdevs_operational": 3, 00:22:48.527 "base_bdevs_list": [ 00:22:48.527 { 00:22:48.527 "name": "BaseBdev1", 00:22:48.527 "uuid": "bb102c03-2fc4-4a34-be2e-1b322ff09ccd", 00:22:48.527 "is_configured": true, 00:22:48.527 "data_offset": 2048, 00:22:48.527 "data_size": 63488 00:22:48.527 }, 00:22:48.527 { 00:22:48.527 "name": "BaseBdev2", 00:22:48.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.527 "is_configured": false, 00:22:48.527 "data_offset": 0, 00:22:48.527 "data_size": 0 00:22:48.527 }, 00:22:48.527 { 00:22:48.527 "name": "BaseBdev3", 00:22:48.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.527 "is_configured": false, 00:22:48.527 "data_offset": 0, 00:22:48.527 "data_size": 0 00:22:48.527 } 00:22:48.527 ] 00:22:48.527 }' 00:22:48.527 05:20:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:48.527 05:20:07 -- common/autotest_common.sh@10 -- # set +x 00:22:48.785 05:20:07 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:49.044 [2024-07-26 05:20:07.931867] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:49.044 BaseBdev2 00:22:49.044 05:20:07 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:22:49.044 05:20:07 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:22:49.044 05:20:07 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:49.045 05:20:07 -- common/autotest_common.sh@889 -- # local i 00:22:49.045 05:20:07 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:49.045 05:20:07 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:49.045 05:20:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:49.045 05:20:08 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:49.304 [ 00:22:49.304 { 00:22:49.304 "name": "BaseBdev2", 00:22:49.304 "aliases": [ 00:22:49.304 
"9c29276e-9bc2-45f5-a714-79507dc8d04a" 00:22:49.304 ], 00:22:49.304 "product_name": "Malloc disk", 00:22:49.304 "block_size": 512, 00:22:49.304 "num_blocks": 65536, 00:22:49.304 "uuid": "9c29276e-9bc2-45f5-a714-79507dc8d04a", 00:22:49.304 "assigned_rate_limits": { 00:22:49.304 "rw_ios_per_sec": 0, 00:22:49.304 "rw_mbytes_per_sec": 0, 00:22:49.304 "r_mbytes_per_sec": 0, 00:22:49.304 "w_mbytes_per_sec": 0 00:22:49.304 }, 00:22:49.304 "claimed": true, 00:22:49.304 "claim_type": "exclusive_write", 00:22:49.304 "zoned": false, 00:22:49.304 "supported_io_types": { 00:22:49.304 "read": true, 00:22:49.304 "write": true, 00:22:49.304 "unmap": true, 00:22:49.304 "write_zeroes": true, 00:22:49.304 "flush": true, 00:22:49.304 "reset": true, 00:22:49.304 "compare": false, 00:22:49.304 "compare_and_write": false, 00:22:49.304 "abort": true, 00:22:49.304 "nvme_admin": false, 00:22:49.304 "nvme_io": false 00:22:49.304 }, 00:22:49.304 "memory_domains": [ 00:22:49.304 { 00:22:49.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:49.304 "dma_device_type": 2 00:22:49.304 } 00:22:49.304 ], 00:22:49.304 "driver_specific": {} 00:22:49.304 } 00:22:49.304 ] 00:22:49.304 05:20:08 -- common/autotest_common.sh@895 -- # return 0 00:22:49.304 05:20:08 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:49.304 05:20:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:49.304 05:20:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:49.304 05:20:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:49.304 05:20:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:49.304 05:20:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:49.304 05:20:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:49.304 05:20:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:49.304 05:20:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:49.304 05:20:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:49.304 05:20:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:49.304 05:20:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:49.304 05:20:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.304 05:20:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:49.563 05:20:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:49.563 "name": "Existed_Raid", 00:22:49.563 "uuid": "be925ba4-28ab-450e-b29d-667df3d84ab5", 00:22:49.563 "strip_size_kb": 64, 00:22:49.563 "state": "configuring", 00:22:49.563 "raid_level": "raid5f", 00:22:49.563 "superblock": true, 00:22:49.563 "num_base_bdevs": 3, 00:22:49.563 "num_base_bdevs_discovered": 2, 00:22:49.563 "num_base_bdevs_operational": 3, 00:22:49.563 "base_bdevs_list": [ 00:22:49.563 { 00:22:49.563 "name": "BaseBdev1", 00:22:49.563 "uuid": "bb102c03-2fc4-4a34-be2e-1b322ff09ccd", 00:22:49.563 "is_configured": true, 00:22:49.563 "data_offset": 2048, 00:22:49.563 "data_size": 63488 00:22:49.563 }, 00:22:49.563 { 00:22:49.563 "name": "BaseBdev2", 00:22:49.563 "uuid": "9c29276e-9bc2-45f5-a714-79507dc8d04a", 00:22:49.563 "is_configured": true, 00:22:49.563 "data_offset": 2048, 00:22:49.563 "data_size": 63488 00:22:49.563 }, 00:22:49.563 { 00:22:49.563 "name": "BaseBdev3", 00:22:49.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.563 "is_configured": false, 00:22:49.563 "data_offset": 0, 00:22:49.563 "data_size": 0 
00:22:49.563 } 00:22:49.563 ] 00:22:49.563 }' 00:22:49.563 05:20:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:49.563 05:20:08 -- common/autotest_common.sh@10 -- # set +x 00:22:49.822 05:20:08 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:50.081 [2024-07-26 05:20:09.156109] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:50.081 [2024-07-26 05:20:09.156343] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:22:50.081 [2024-07-26 05:20:09.156367] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:50.081 [2024-07-26 05:20:09.156480] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:22:50.081 BaseBdev3 00:22:50.081 [2024-07-26 05:20:09.161156] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:22:50.081 [2024-07-26 05:20:09.161327] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:22:50.081 [2024-07-26 05:20:09.161523] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:50.081 05:20:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:22:50.081 05:20:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:22:50.081 05:20:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:50.081 05:20:09 -- common/autotest_common.sh@889 -- # local i 00:22:50.081 05:20:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:50.081 05:20:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:50.081 05:20:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:50.353 05:20:09 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:50.626 [ 00:22:50.626 { 00:22:50.626 "name": "BaseBdev3", 00:22:50.626 "aliases": [ 00:22:50.626 "9b89756d-f635-447b-9eaa-9d1c940455e1" 00:22:50.626 ], 00:22:50.626 "product_name": "Malloc disk", 00:22:50.626 "block_size": 512, 00:22:50.626 "num_blocks": 65536, 00:22:50.626 "uuid": "9b89756d-f635-447b-9eaa-9d1c940455e1", 00:22:50.626 "assigned_rate_limits": { 00:22:50.626 "rw_ios_per_sec": 0, 00:22:50.626 "rw_mbytes_per_sec": 0, 00:22:50.626 "r_mbytes_per_sec": 0, 00:22:50.626 "w_mbytes_per_sec": 0 00:22:50.626 }, 00:22:50.626 "claimed": true, 00:22:50.626 "claim_type": "exclusive_write", 00:22:50.626 "zoned": false, 00:22:50.626 "supported_io_types": { 00:22:50.626 "read": true, 00:22:50.626 "write": true, 00:22:50.626 "unmap": true, 00:22:50.626 "write_zeroes": true, 00:22:50.626 "flush": true, 00:22:50.626 "reset": true, 00:22:50.626 "compare": false, 00:22:50.626 "compare_and_write": false, 00:22:50.626 "abort": true, 00:22:50.626 "nvme_admin": false, 00:22:50.626 "nvme_io": false 00:22:50.626 }, 00:22:50.626 "memory_domains": [ 00:22:50.626 { 00:22:50.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:50.626 "dma_device_type": 2 00:22:50.626 } 00:22:50.626 ], 00:22:50.626 "driver_specific": {} 00:22:50.626 } 00:22:50.626 ] 00:22:50.626 05:20:09 -- common/autotest_common.sh@895 -- # return 0 00:22:50.626 05:20:09 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:50.626 05:20:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:50.626 05:20:09 -- 
bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:22:50.626 05:20:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:50.626 05:20:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:50.626 05:20:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:50.626 05:20:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:50.626 05:20:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:50.626 05:20:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:50.626 05:20:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:50.626 05:20:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:50.626 05:20:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:50.626 05:20:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.626 05:20:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:50.885 05:20:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:50.885 "name": "Existed_Raid", 00:22:50.885 "uuid": "be925ba4-28ab-450e-b29d-667df3d84ab5", 00:22:50.885 "strip_size_kb": 64, 00:22:50.885 "state": "online", 00:22:50.885 "raid_level": "raid5f", 00:22:50.885 "superblock": true, 00:22:50.885 "num_base_bdevs": 3, 00:22:50.885 "num_base_bdevs_discovered": 3, 00:22:50.885 "num_base_bdevs_operational": 3, 00:22:50.885 "base_bdevs_list": [ 00:22:50.885 { 00:22:50.885 "name": "BaseBdev1", 00:22:50.885 "uuid": "bb102c03-2fc4-4a34-be2e-1b322ff09ccd", 00:22:50.885 "is_configured": true, 00:22:50.885 "data_offset": 2048, 00:22:50.885 "data_size": 63488 00:22:50.885 }, 00:22:50.885 { 00:22:50.885 "name": "BaseBdev2", 00:22:50.885 "uuid": "9c29276e-9bc2-45f5-a714-79507dc8d04a", 00:22:50.885 "is_configured": true, 00:22:50.885 "data_offset": 2048, 00:22:50.885 "data_size": 63488 00:22:50.885 }, 00:22:50.885 { 00:22:50.885 "name": "BaseBdev3", 00:22:50.885 "uuid": "9b89756d-f635-447b-9eaa-9d1c940455e1", 00:22:50.885 "is_configured": true, 00:22:50.885 "data_offset": 2048, 00:22:50.885 "data_size": 63488 00:22:50.885 } 00:22:50.885 ] 00:22:50.885 }' 00:22:50.885 05:20:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:50.885 05:20:09 -- common/autotest_common.sh@10 -- # set +x 00:22:51.144 05:20:10 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:51.403 [2024-07-26 05:20:10.338173] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:51.403 05:20:10 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:22:51.403 05:20:10 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:22:51.403 05:20:10 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:22:51.403 05:20:10 -- bdev/bdev_raid.sh@196 -- # return 0 00:22:51.403 05:20:10 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:22:51.403 05:20:10 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:22:51.403 05:20:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:51.403 05:20:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:51.403 05:20:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:51.403 05:20:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:51.403 05:20:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:51.403 05:20:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:51.403 05:20:10 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:51.403 05:20:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:51.403 05:20:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:51.403 05:20:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.403 05:20:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:51.662 05:20:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:51.662 "name": "Existed_Raid", 00:22:51.662 "uuid": "be925ba4-28ab-450e-b29d-667df3d84ab5", 00:22:51.662 "strip_size_kb": 64, 00:22:51.662 "state": "online", 00:22:51.662 "raid_level": "raid5f", 00:22:51.662 "superblock": true, 00:22:51.662 "num_base_bdevs": 3, 00:22:51.662 "num_base_bdevs_discovered": 2, 00:22:51.662 "num_base_bdevs_operational": 2, 00:22:51.662 "base_bdevs_list": [ 00:22:51.662 { 00:22:51.662 "name": null, 00:22:51.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.662 "is_configured": false, 00:22:51.662 "data_offset": 2048, 00:22:51.662 "data_size": 63488 00:22:51.662 }, 00:22:51.662 { 00:22:51.662 "name": "BaseBdev2", 00:22:51.662 "uuid": "9c29276e-9bc2-45f5-a714-79507dc8d04a", 00:22:51.662 "is_configured": true, 00:22:51.662 "data_offset": 2048, 00:22:51.662 "data_size": 63488 00:22:51.662 }, 00:22:51.662 { 00:22:51.662 "name": "BaseBdev3", 00:22:51.662 "uuid": "9b89756d-f635-447b-9eaa-9d1c940455e1", 00:22:51.662 "is_configured": true, 00:22:51.662 "data_offset": 2048, 00:22:51.662 "data_size": 63488 00:22:51.662 } 00:22:51.662 ] 00:22:51.662 }' 00:22:51.662 05:20:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:51.662 05:20:10 -- common/autotest_common.sh@10 -- # set +x 00:22:51.921 05:20:10 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:22:51.921 05:20:10 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:51.921 05:20:10 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.921 05:20:10 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:52.179 05:20:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:52.179 05:20:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:52.179 05:20:11 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:52.437 [2024-07-26 05:20:11.312725] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:52.437 [2024-07-26 05:20:11.312908] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:52.437 [2024-07-26 05:20:11.312992] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:52.437 05:20:11 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:52.437 05:20:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:52.437 05:20:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:52.437 05:20:11 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.696 05:20:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:52.696 05:20:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:52.696 05:20:11 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:52.696 [2024-07-26 05:20:11.802951] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev3 00:22:52.696 [2024-07-26 05:20:11.803286] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:22:52.955 05:20:11 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:52.955 05:20:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:52.955 05:20:11 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.955 05:20:11 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:22:53.214 05:20:12 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:22:53.214 05:20:12 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:22:53.214 05:20:12 -- bdev/bdev_raid.sh@287 -- # killprocess 82587 00:22:53.214 05:20:12 -- common/autotest_common.sh@926 -- # '[' -z 82587 ']' 00:22:53.214 05:20:12 -- common/autotest_common.sh@930 -- # kill -0 82587 00:22:53.214 05:20:12 -- common/autotest_common.sh@931 -- # uname 00:22:53.214 05:20:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:53.214 05:20:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82587 00:22:53.214 killing process with pid 82587 00:22:53.214 05:20:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:53.214 05:20:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:53.214 05:20:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82587' 00:22:53.214 05:20:12 -- common/autotest_common.sh@945 -- # kill 82587 00:22:53.214 05:20:12 -- common/autotest_common.sh@950 -- # wait 82587 00:22:53.214 [2024-07-26 05:20:12.156021] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:53.214 [2024-07-26 05:20:12.156209] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:54.151 05:20:13 -- bdev/bdev_raid.sh@289 -- # return 0 00:22:54.151 00:22:54.151 real 0m10.330s 00:22:54.151 user 0m17.261s 00:22:54.151 sys 0m1.508s 00:22:54.151 05:20:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:54.151 ************************************ 00:22:54.151 END TEST raid5f_state_function_test_sb 00:22:54.151 ************************************ 00:22:54.151 05:20:13 -- common/autotest_common.sh@10 -- # set +x 00:22:54.151 05:20:13 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:22:54.151 05:20:13 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:22:54.151 05:20:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:54.151 05:20:13 -- common/autotest_common.sh@10 -- # set +x 00:22:54.151 ************************************ 00:22:54.151 START TEST raid5f_superblock_test 00:22:54.151 ************************************ 00:22:54.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
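Editorial note: the state checks traced above (verify_raid_bdev_state) all follow the same pattern of dumping every raid bdev over RPC and filtering with jq. The sketch below condenses that pattern; the rpc.py and jq invocations are the ones visible in the trace, while the helper name, argument list, and error handling are illustrative, not the script's actual code.

# Minimal sketch of the state check, assuming the paths used in this run.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

check_raid_state() {
	local name=$1 expected_state=$2 expected_operational=$3
	local info
	# Dump all raid bdevs and keep only the one under test
	info=$($rpc -s $sock bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
	# Compare the fields the test asserts on (state, operational member count)
	[[ $(jq -r '.state' <<< "$info") == "$expected_state" ]] || return 1
	[[ $(jq -r '.num_base_bdevs_operational' <<< "$info") -eq $expected_operational ]] || return 1
}

# e.g. after BaseBdev1 is removed, raid5f should stay online with 2 operational members:
check_raid_state Existed_Raid online 2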
00:22:54.151 05:20:13 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid5f 3 00:22:54.151 05:20:13 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:22:54.151 05:20:13 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:22:54.151 05:20:13 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:22:54.151 05:20:13 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:22:54.151 05:20:13 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:22:54.151 05:20:13 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:22:54.151 05:20:13 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:22:54.151 05:20:13 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:22:54.151 05:20:13 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:22:54.151 05:20:13 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:22:54.152 05:20:13 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:22:54.152 05:20:13 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:22:54.152 05:20:13 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:22:54.152 05:20:13 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:22:54.152 05:20:13 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:22:54.152 05:20:13 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:22:54.152 05:20:13 -- bdev/bdev_raid.sh@357 -- # raid_pid=82935 00:22:54.152 05:20:13 -- bdev/bdev_raid.sh@358 -- # waitforlisten 82935 /var/tmp/spdk-raid.sock 00:22:54.152 05:20:13 -- common/autotest_common.sh@819 -- # '[' -z 82935 ']' 00:22:54.152 05:20:13 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:54.152 05:20:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:54.152 05:20:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:54.152 05:20:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:54.152 05:20:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:54.152 05:20:13 -- common/autotest_common.sh@10 -- # set +x 00:22:54.152 [2024-07-26 05:20:13.179925] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
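Editorial note: the trace that follows builds the raid5f array the superblock test operates on. The sketch below condenses that setup; every RPC command is taken verbatim from the trace, but the loop structure, variable names, and comments are illustrative rather than the script's actual code. The RPC server is the standalone bdev_svc app started above with -r /var/tmp/spdk-raid.sock -L bdev_raid.

# Condensed sketch of the setup traced below, assuming this run's paths.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# One malloc bdev per member, each wrapped in a passthru bdev with a fixed UUID
for i in 1 2 3; do
	$rpc bdev_malloc_create 32 512 -b malloc$i
	$rpc bdev_passthru_create -b malloc$i -p pt$i \
		-u 00000000-0000-0000-0000-00000000000$i
done

# Assemble the three passthru bdevs into a raid5f bdev with a 64 KiB strip and
# an on-disk superblock (-s), which the later examine/reassembly steps rely on
$rpc bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s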
00:22:54.152 [2024-07-26 05:20:13.180316] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82935 ] 00:22:54.410 [2024-07-26 05:20:13.349804] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.410 [2024-07-26 05:20:13.496278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.668 [2024-07-26 05:20:13.640197] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:55.235 05:20:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:55.235 05:20:14 -- common/autotest_common.sh@852 -- # return 0 00:22:55.235 05:20:14 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:22:55.235 05:20:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:55.235 05:20:14 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:22:55.235 05:20:14 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:22:55.235 05:20:14 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:55.235 05:20:14 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:55.235 05:20:14 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:55.235 05:20:14 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:55.235 05:20:14 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:22:55.235 malloc1 00:22:55.235 05:20:14 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:55.493 [2024-07-26 05:20:14.521453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:55.493 [2024-07-26 05:20:14.521527] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:55.493 [2024-07-26 05:20:14.521563] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:22:55.493 [2024-07-26 05:20:14.521576] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:55.493 [2024-07-26 05:20:14.523837] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:55.493 [2024-07-26 05:20:14.523877] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:55.493 pt1 00:22:55.493 05:20:14 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:55.493 05:20:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:55.493 05:20:14 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:22:55.493 05:20:14 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:22:55.493 05:20:14 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:55.494 05:20:14 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:55.494 05:20:14 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:55.494 05:20:14 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:55.494 05:20:14 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:22:55.752 malloc2 00:22:55.753 05:20:14 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:22:56.011 [2024-07-26 05:20:14.995575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:56.011 [2024-07-26 05:20:14.995652] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:56.011 [2024-07-26 05:20:14.995682] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:22:56.011 [2024-07-26 05:20:14.995695] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:56.011 [2024-07-26 05:20:14.997821] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:56.011 [2024-07-26 05:20:14.997860] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:56.011 pt2 00:22:56.011 05:20:15 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:56.011 05:20:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:56.011 05:20:15 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:22:56.011 05:20:15 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:22:56.011 05:20:15 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:56.011 05:20:15 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:56.011 05:20:15 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:56.011 05:20:15 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:56.011 05:20:15 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:22:56.270 malloc3 00:22:56.270 05:20:15 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:56.529 [2024-07-26 05:20:15.435933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:56.529 [2024-07-26 05:20:15.435990] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:56.529 [2024-07-26 05:20:15.436050] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:22:56.529 [2024-07-26 05:20:15.436066] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:56.529 [2024-07-26 05:20:15.438164] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:56.529 [2024-07-26 05:20:15.438213] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:56.529 pt3 00:22:56.529 05:20:15 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:56.529 05:20:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:56.529 05:20:15 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:22:56.529 [2024-07-26 05:20:15.624023] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:56.529 [2024-07-26 05:20:15.625792] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:56.529 [2024-07-26 05:20:15.625863] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:56.529 [2024-07-26 05:20:15.626064] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008780 00:22:56.529 [2024-07-26 05:20:15.626083] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:56.529 [2024-07-26 05:20:15.626215] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:22:56.529 [2024-07-26 05:20:15.630696] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008780 00:22:56.529 [2024-07-26 05:20:15.630874] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008780 00:22:56.529 [2024-07-26 05:20:15.631175] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:56.787 05:20:15 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:56.787 05:20:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:56.787 05:20:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:56.787 05:20:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:56.787 05:20:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:56.787 05:20:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:56.787 05:20:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:56.787 05:20:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:56.787 05:20:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:56.787 05:20:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:56.787 05:20:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.787 05:20:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.787 05:20:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:56.787 "name": "raid_bdev1", 00:22:56.787 "uuid": "6a9ecd61-a461-46bd-a4f4-77119c8efcbf", 00:22:56.787 "strip_size_kb": 64, 00:22:56.787 "state": "online", 00:22:56.787 "raid_level": "raid5f", 00:22:56.787 "superblock": true, 00:22:56.787 "num_base_bdevs": 3, 00:22:56.787 "num_base_bdevs_discovered": 3, 00:22:56.787 "num_base_bdevs_operational": 3, 00:22:56.787 "base_bdevs_list": [ 00:22:56.787 { 00:22:56.787 "name": "pt1", 00:22:56.787 "uuid": "651f9096-42a0-53ae-b7ac-aab7ea068e3f", 00:22:56.787 "is_configured": true, 00:22:56.787 "data_offset": 2048, 00:22:56.787 "data_size": 63488 00:22:56.787 }, 00:22:56.787 { 00:22:56.787 "name": "pt2", 00:22:56.787 "uuid": "06f4c2e9-54be-5085-8052-baa13da3c968", 00:22:56.787 "is_configured": true, 00:22:56.787 "data_offset": 2048, 00:22:56.787 "data_size": 63488 00:22:56.787 }, 00:22:56.787 { 00:22:56.787 "name": "pt3", 00:22:56.787 "uuid": "ddbfd7ec-f09b-57d1-8e1f-834b9ad27b5a", 00:22:56.787 "is_configured": true, 00:22:56.787 "data_offset": 2048, 00:22:56.787 "data_size": 63488 00:22:56.787 } 00:22:56.787 ] 00:22:56.787 }' 00:22:56.787 05:20:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:56.787 05:20:15 -- common/autotest_common.sh@10 -- # set +x 00:22:57.047 05:20:16 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:22:57.047 05:20:16 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:57.306 [2024-07-26 05:20:16.348063] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:57.306 05:20:16 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=6a9ecd61-a461-46bd-a4f4-77119c8efcbf 00:22:57.306 05:20:16 -- bdev/bdev_raid.sh@380 -- # '[' -z 6a9ecd61-a461-46bd-a4f4-77119c8efcbf ']' 00:22:57.306 05:20:16 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:57.564 [2024-07-26 05:20:16.595938] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:57.564 [2024-07-26 05:20:16.595963] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:57.564 [2024-07-26 05:20:16.596052] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:57.565 [2024-07-26 05:20:16.596147] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:57.565 [2024-07-26 05:20:16.596164] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008780 name raid_bdev1, state offline 00:22:57.565 05:20:16 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.565 05:20:16 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:22:57.823 05:20:16 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:22:57.823 05:20:16 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:22:57.823 05:20:16 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:57.824 05:20:16 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:58.082 05:20:16 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:58.082 05:20:16 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:58.082 05:20:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:58.082 05:20:17 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:58.341 05:20:17 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:58.341 05:20:17 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:58.599 05:20:17 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:22:58.599 05:20:17 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:58.599 05:20:17 -- common/autotest_common.sh@640 -- # local es=0 00:22:58.599 05:20:17 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:58.599 05:20:17 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:58.599 05:20:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:58.599 05:20:17 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:58.599 05:20:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:58.599 05:20:17 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:58.599 05:20:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:58.599 05:20:17 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:58.599 05:20:17 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:58.599 05:20:17 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:58.858 [2024-07-26 05:20:17.816214] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:58.858 [2024-07-26 05:20:17.818063] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:58.858 [2024-07-26 05:20:17.818265] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:58.858 [2024-07-26 05:20:17.818341] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:22:58.858 [2024-07-26 05:20:17.818397] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:22:58.858 [2024-07-26 05:20:17.818427] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:22:58.858 [2024-07-26 05:20:17.818447] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:58.858 [2024-07-26 05:20:17.818461] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state configuring 00:22:58.858 request: 00:22:58.858 { 00:22:58.858 "name": "raid_bdev1", 00:22:58.858 "raid_level": "raid5f", 00:22:58.858 "base_bdevs": [ 00:22:58.858 "malloc1", 00:22:58.858 "malloc2", 00:22:58.858 "malloc3" 00:22:58.858 ], 00:22:58.858 "superblock": false, 00:22:58.858 "strip_size_kb": 64, 00:22:58.858 "method": "bdev_raid_create", 00:22:58.858 "req_id": 1 00:22:58.858 } 00:22:58.858 Got JSON-RPC error response 00:22:58.858 response: 00:22:58.858 { 00:22:58.858 "code": -17, 00:22:58.858 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:58.858 } 00:22:58.858 05:20:17 -- common/autotest_common.sh@643 -- # es=1 00:22:58.858 05:20:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:58.858 05:20:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:58.858 05:20:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:58.858 05:20:17 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.858 05:20:17 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:22:59.117 05:20:18 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:22:59.117 05:20:18 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:22:59.117 05:20:18 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:59.117 [2024-07-26 05:20:18.196260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:59.117 [2024-07-26 05:20:18.196345] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:59.117 [2024-07-26 05:20:18.196372] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:22:59.117 [2024-07-26 05:20:18.196386] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:59.117 [2024-07-26 05:20:18.198454] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:59.117 [2024-07-26 05:20:18.198623] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:59.117 [2024-07-26 05:20:18.198778] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:22:59.117 [2024-07-26 05:20:18.198847] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:59.117 pt1 00:22:59.117 05:20:18 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring raid5f 64 3 00:22:59.117 05:20:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:59.117 05:20:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:59.117 05:20:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:59.117 05:20:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:59.117 05:20:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:59.117 05:20:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:59.117 05:20:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:59.117 05:20:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:59.117 05:20:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:59.117 05:20:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.117 05:20:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.376 05:20:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:59.376 "name": "raid_bdev1", 00:22:59.376 "uuid": "6a9ecd61-a461-46bd-a4f4-77119c8efcbf", 00:22:59.376 "strip_size_kb": 64, 00:22:59.376 "state": "configuring", 00:22:59.376 "raid_level": "raid5f", 00:22:59.376 "superblock": true, 00:22:59.376 "num_base_bdevs": 3, 00:22:59.376 "num_base_bdevs_discovered": 1, 00:22:59.376 "num_base_bdevs_operational": 3, 00:22:59.376 "base_bdevs_list": [ 00:22:59.376 { 00:22:59.376 "name": "pt1", 00:22:59.376 "uuid": "651f9096-42a0-53ae-b7ac-aab7ea068e3f", 00:22:59.376 "is_configured": true, 00:22:59.376 "data_offset": 2048, 00:22:59.376 "data_size": 63488 00:22:59.376 }, 00:22:59.376 { 00:22:59.376 "name": null, 00:22:59.376 "uuid": "06f4c2e9-54be-5085-8052-baa13da3c968", 00:22:59.376 "is_configured": false, 00:22:59.376 "data_offset": 2048, 00:22:59.376 "data_size": 63488 00:22:59.376 }, 00:22:59.376 { 00:22:59.376 "name": null, 00:22:59.376 "uuid": "ddbfd7ec-f09b-57d1-8e1f-834b9ad27b5a", 00:22:59.376 "is_configured": false, 00:22:59.376 "data_offset": 2048, 00:22:59.376 "data_size": 63488 00:22:59.376 } 00:22:59.376 ] 00:22:59.376 }' 00:22:59.376 05:20:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:59.376 05:20:18 -- common/autotest_common.sh@10 -- # set +x 00:22:59.635 05:20:18 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:22:59.635 05:20:18 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:59.893 [2024-07-26 05:20:18.920470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:59.893 [2024-07-26 05:20:18.920539] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:59.893 [2024-07-26 05:20:18.920567] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:22:59.893 [2024-07-26 05:20:18.920581] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:59.893 [2024-07-26 05:20:18.920988] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:59.893 [2024-07-26 05:20:18.921054] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:59.894 [2024-07-26 05:20:18.921158] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:59.894 [2024-07-26 05:20:18.921187] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:59.894 pt2 00:22:59.894 05:20:18 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:00.153 [2024-07-26 05:20:19.112569] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:00.153 05:20:19 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:00.153 05:20:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:00.153 05:20:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:00.153 05:20:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:00.153 05:20:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:00.153 05:20:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:00.153 05:20:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:00.153 05:20:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:00.153 05:20:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:00.153 05:20:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:00.153 05:20:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.153 05:20:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.412 05:20:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:00.412 "name": "raid_bdev1", 00:23:00.412 "uuid": "6a9ecd61-a461-46bd-a4f4-77119c8efcbf", 00:23:00.412 "strip_size_kb": 64, 00:23:00.412 "state": "configuring", 00:23:00.412 "raid_level": "raid5f", 00:23:00.412 "superblock": true, 00:23:00.412 "num_base_bdevs": 3, 00:23:00.412 "num_base_bdevs_discovered": 1, 00:23:00.412 "num_base_bdevs_operational": 3, 00:23:00.412 "base_bdevs_list": [ 00:23:00.412 { 00:23:00.412 "name": "pt1", 00:23:00.412 "uuid": "651f9096-42a0-53ae-b7ac-aab7ea068e3f", 00:23:00.412 "is_configured": true, 00:23:00.412 "data_offset": 2048, 00:23:00.412 "data_size": 63488 00:23:00.412 }, 00:23:00.412 { 00:23:00.412 "name": null, 00:23:00.412 "uuid": "06f4c2e9-54be-5085-8052-baa13da3c968", 00:23:00.412 "is_configured": false, 00:23:00.412 "data_offset": 2048, 00:23:00.412 "data_size": 63488 00:23:00.412 }, 00:23:00.412 { 00:23:00.412 "name": null, 00:23:00.412 "uuid": "ddbfd7ec-f09b-57d1-8e1f-834b9ad27b5a", 00:23:00.412 "is_configured": false, 00:23:00.412 "data_offset": 2048, 00:23:00.412 "data_size": 63488 00:23:00.412 } 00:23:00.412 ] 00:23:00.412 }' 00:23:00.412 05:20:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:00.412 05:20:19 -- common/autotest_common.sh@10 -- # set +x 00:23:00.671 05:20:19 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:23:00.671 05:20:19 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:00.671 05:20:19 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:00.930 [2024-07-26 05:20:19.844697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:00.930 [2024-07-26 05:20:19.844765] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:00.930 [2024-07-26 05:20:19.844794] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:23:00.930 [2024-07-26 05:20:19.844807] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:00.930 [2024-07-26 05:20:19.845338] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:00.930 [2024-07-26 05:20:19.845370] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:00.930 [2024-07-26 05:20:19.845494] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:00.930 [2024-07-26 05:20:19.845520] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:00.930 pt2 00:23:00.930 05:20:19 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:23:00.930 05:20:19 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:00.930 05:20:19 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:01.189 [2024-07-26 05:20:20.104781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:01.189 [2024-07-26 05:20:20.105056] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:01.189 [2024-07-26 05:20:20.105127] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:23:01.189 [2024-07-26 05:20:20.105245] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:01.189 [2024-07-26 05:20:20.105715] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:01.189 [2024-07-26 05:20:20.105875] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:01.189 [2024-07-26 05:20:20.106103] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:01.189 [2024-07-26 05:20:20.106231] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:01.189 [2024-07-26 05:20:20.106510] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009980 00:23:01.189 [2024-07-26 05:20:20.106619] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:01.189 [2024-07-26 05:20:20.106814] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:23:01.189 [2024-07-26 05:20:20.110850] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009980 00:23:01.189 [2024-07-26 05:20:20.111073] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009980 00:23:01.189 [2024-07-26 05:20:20.111415] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:01.189 pt3 00:23:01.189 05:20:20 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:23:01.189 05:20:20 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:01.189 05:20:20 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:01.189 05:20:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:01.189 05:20:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:01.189 05:20:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:01.189 05:20:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:01.189 05:20:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:01.189 05:20:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:01.189 05:20:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:01.189 05:20:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:01.189 05:20:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:01.189 05:20:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.189 
05:20:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.448 05:20:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:01.448 "name": "raid_bdev1", 00:23:01.448 "uuid": "6a9ecd61-a461-46bd-a4f4-77119c8efcbf", 00:23:01.448 "strip_size_kb": 64, 00:23:01.448 "state": "online", 00:23:01.448 "raid_level": "raid5f", 00:23:01.448 "superblock": true, 00:23:01.448 "num_base_bdevs": 3, 00:23:01.448 "num_base_bdevs_discovered": 3, 00:23:01.448 "num_base_bdevs_operational": 3, 00:23:01.448 "base_bdevs_list": [ 00:23:01.448 { 00:23:01.448 "name": "pt1", 00:23:01.448 "uuid": "651f9096-42a0-53ae-b7ac-aab7ea068e3f", 00:23:01.448 "is_configured": true, 00:23:01.448 "data_offset": 2048, 00:23:01.448 "data_size": 63488 00:23:01.448 }, 00:23:01.448 { 00:23:01.448 "name": "pt2", 00:23:01.448 "uuid": "06f4c2e9-54be-5085-8052-baa13da3c968", 00:23:01.448 "is_configured": true, 00:23:01.448 "data_offset": 2048, 00:23:01.448 "data_size": 63488 00:23:01.448 }, 00:23:01.448 { 00:23:01.448 "name": "pt3", 00:23:01.448 "uuid": "ddbfd7ec-f09b-57d1-8e1f-834b9ad27b5a", 00:23:01.448 "is_configured": true, 00:23:01.448 "data_offset": 2048, 00:23:01.448 "data_size": 63488 00:23:01.448 } 00:23:01.448 ] 00:23:01.448 }' 00:23:01.448 05:20:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:01.448 05:20:20 -- common/autotest_common.sh@10 -- # set +x 00:23:01.707 05:20:20 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:01.707 05:20:20 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:23:01.707 [2024-07-26 05:20:20.748441] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:01.707 05:20:20 -- bdev/bdev_raid.sh@430 -- # '[' 6a9ecd61-a461-46bd-a4f4-77119c8efcbf '!=' 6a9ecd61-a461-46bd-a4f4-77119c8efcbf ']' 00:23:01.707 05:20:20 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:23:01.707 05:20:20 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:01.707 05:20:20 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:01.707 05:20:20 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:01.966 [2024-07-26 05:20:20.992370] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:01.967 05:20:21 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:01.967 05:20:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:01.967 05:20:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:01.967 05:20:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:01.967 05:20:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:01.967 05:20:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:01.967 05:20:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:01.967 05:20:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:01.967 05:20:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:01.967 05:20:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:01.967 05:20:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.967 05:20:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.226 05:20:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:02.226 "name": "raid_bdev1", 00:23:02.226 "uuid": "6a9ecd61-a461-46bd-a4f4-77119c8efcbf", 00:23:02.226 "strip_size_kb": 64, 
00:23:02.226 "state": "online", 00:23:02.226 "raid_level": "raid5f", 00:23:02.226 "superblock": true, 00:23:02.226 "num_base_bdevs": 3, 00:23:02.226 "num_base_bdevs_discovered": 2, 00:23:02.226 "num_base_bdevs_operational": 2, 00:23:02.226 "base_bdevs_list": [ 00:23:02.226 { 00:23:02.226 "name": null, 00:23:02.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.226 "is_configured": false, 00:23:02.226 "data_offset": 2048, 00:23:02.226 "data_size": 63488 00:23:02.226 }, 00:23:02.226 { 00:23:02.226 "name": "pt2", 00:23:02.226 "uuid": "06f4c2e9-54be-5085-8052-baa13da3c968", 00:23:02.226 "is_configured": true, 00:23:02.226 "data_offset": 2048, 00:23:02.226 "data_size": 63488 00:23:02.226 }, 00:23:02.226 { 00:23:02.226 "name": "pt3", 00:23:02.226 "uuid": "ddbfd7ec-f09b-57d1-8e1f-834b9ad27b5a", 00:23:02.226 "is_configured": true, 00:23:02.226 "data_offset": 2048, 00:23:02.226 "data_size": 63488 00:23:02.226 } 00:23:02.226 ] 00:23:02.226 }' 00:23:02.226 05:20:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:02.226 05:20:21 -- common/autotest_common.sh@10 -- # set +x 00:23:02.485 05:20:21 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:02.744 [2024-07-26 05:20:21.656608] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:02.744 [2024-07-26 05:20:21.656638] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:02.744 [2024-07-26 05:20:21.656710] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:02.744 [2024-07-26 05:20:21.656773] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:02.744 [2024-07-26 05:20:21.656789] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state offline 00:23:02.744 05:20:21 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:23:02.744 05:20:21 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.003 05:20:21 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:23:03.003 05:20:21 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:23:03.003 05:20:21 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:23:03.003 05:20:21 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:03.003 05:20:21 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:03.261 05:20:22 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:03.261 05:20:22 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:03.261 05:20:22 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:03.521 05:20:22 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:03.521 05:20:22 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:03.521 05:20:22 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:23:03.521 05:20:22 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:23:03.521 05:20:22 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:03.521 [2024-07-26 05:20:22.580757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:03.521 [2024-07-26 05:20:22.580840] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
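Editorial note: the sequence around this point tears the array down and then relies on the superblock written to the base bdevs to reassemble it as members reappear. A condensed sketch of that round-trip is below; the commands are the ones logged here, the comments summarize the states reported in the surrounding trace, and nothing else should be read as the script's literal code.

# Sketch of the teardown/reassembly round-trip, assuming this run's paths.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

$rpc bdev_raid_delete raid_bdev1    # raid bdev goes offline; superblocks stay on the members
$rpc bdev_passthru_delete pt2
$rpc bdev_passthru_delete pt3

# Bringing one member back lets the examine path find its stored superblock, so
# raid_bdev1 reappears in the 'configuring' state with 1 of 2 operational members...
$rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'

# ...and once pt3 is recreated as well, the trace shows the array back online
# with 2 of its 3 base bdev slots populated.
$rpc bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003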
00:23:03.521 [2024-07-26 05:20:22.580866] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a580 00:23:03.521 [2024-07-26 05:20:22.580883] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:03.521 [2024-07-26 05:20:22.583223] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:03.521 [2024-07-26 05:20:22.583267] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:03.521 [2024-07-26 05:20:22.583352] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:03.521 [2024-07-26 05:20:22.583403] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:03.521 pt2 00:23:03.521 05:20:22 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:23:03.521 05:20:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:03.521 05:20:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:03.521 05:20:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:03.521 05:20:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:03.521 05:20:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:03.521 05:20:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:03.521 05:20:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:03.521 05:20:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:03.521 05:20:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:03.521 05:20:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.521 05:20:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.780 05:20:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:03.780 "name": "raid_bdev1", 00:23:03.780 "uuid": "6a9ecd61-a461-46bd-a4f4-77119c8efcbf", 00:23:03.780 "strip_size_kb": 64, 00:23:03.780 "state": "configuring", 00:23:03.780 "raid_level": "raid5f", 00:23:03.780 "superblock": true, 00:23:03.780 "num_base_bdevs": 3, 00:23:03.780 "num_base_bdevs_discovered": 1, 00:23:03.780 "num_base_bdevs_operational": 2, 00:23:03.780 "base_bdevs_list": [ 00:23:03.780 { 00:23:03.780 "name": null, 00:23:03.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.780 "is_configured": false, 00:23:03.780 "data_offset": 2048, 00:23:03.780 "data_size": 63488 00:23:03.780 }, 00:23:03.780 { 00:23:03.780 "name": "pt2", 00:23:03.780 "uuid": "06f4c2e9-54be-5085-8052-baa13da3c968", 00:23:03.780 "is_configured": true, 00:23:03.780 "data_offset": 2048, 00:23:03.780 "data_size": 63488 00:23:03.780 }, 00:23:03.780 { 00:23:03.780 "name": null, 00:23:03.780 "uuid": "ddbfd7ec-f09b-57d1-8e1f-834b9ad27b5a", 00:23:03.780 "is_configured": false, 00:23:03.780 "data_offset": 2048, 00:23:03.780 "data_size": 63488 00:23:03.780 } 00:23:03.780 ] 00:23:03.780 }' 00:23:03.780 05:20:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:03.780 05:20:22 -- common/autotest_common.sh@10 -- # set +x 00:23:04.039 05:20:23 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:23:04.039 05:20:23 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:23:04.039 05:20:23 -- bdev/bdev_raid.sh@462 -- # i=2 00:23:04.039 05:20:23 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:04.325 [2024-07-26 05:20:23.284901] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:04.325 [2024-07-26 05:20:23.284971] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:04.325 [2024-07-26 05:20:23.285025] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:23:04.325 [2024-07-26 05:20:23.285060] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:04.325 [2024-07-26 05:20:23.285573] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:04.325 [2024-07-26 05:20:23.285617] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:04.325 [2024-07-26 05:20:23.285707] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:04.325 [2024-07-26 05:20:23.285736] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:04.325 [2024-07-26 05:20:23.285929] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ab80 00:23:04.325 [2024-07-26 05:20:23.285947] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:04.325 [2024-07-26 05:20:23.286028] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:23:04.325 pt3 00:23:04.325 [2024-07-26 05:20:23.290040] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ab80 00:23:04.325 [2024-07-26 05:20:23.290062] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ab80 00:23:04.325 [2024-07-26 05:20:23.290310] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:04.325 05:20:23 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:04.325 05:20:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:04.325 05:20:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:04.325 05:20:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:04.325 05:20:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:04.325 05:20:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:04.325 05:20:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:04.325 05:20:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:04.325 05:20:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:04.325 05:20:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:04.325 05:20:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.325 05:20:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.592 05:20:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:04.592 "name": "raid_bdev1", 00:23:04.592 "uuid": "6a9ecd61-a461-46bd-a4f4-77119c8efcbf", 00:23:04.592 "strip_size_kb": 64, 00:23:04.592 "state": "online", 00:23:04.592 "raid_level": "raid5f", 00:23:04.592 "superblock": true, 00:23:04.592 "num_base_bdevs": 3, 00:23:04.592 "num_base_bdevs_discovered": 2, 00:23:04.592 "num_base_bdevs_operational": 2, 00:23:04.592 "base_bdevs_list": [ 00:23:04.592 { 00:23:04.592 "name": null, 00:23:04.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.592 "is_configured": false, 00:23:04.592 "data_offset": 2048, 00:23:04.592 "data_size": 63488 00:23:04.592 }, 00:23:04.592 { 00:23:04.592 "name": "pt2", 00:23:04.592 "uuid": "06f4c2e9-54be-5085-8052-baa13da3c968", 
00:23:04.592 "is_configured": true, 00:23:04.592 "data_offset": 2048, 00:23:04.592 "data_size": 63488 00:23:04.592 }, 00:23:04.592 { 00:23:04.592 "name": "pt3", 00:23:04.592 "uuid": "ddbfd7ec-f09b-57d1-8e1f-834b9ad27b5a", 00:23:04.592 "is_configured": true, 00:23:04.592 "data_offset": 2048, 00:23:04.592 "data_size": 63488 00:23:04.592 } 00:23:04.592 ] 00:23:04.592 }' 00:23:04.592 05:20:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:04.592 05:20:23 -- common/autotest_common.sh@10 -- # set +x 00:23:04.850 05:20:23 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:23:04.850 05:20:23 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:05.109 [2024-07-26 05:20:24.046637] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:05.109 [2024-07-26 05:20:24.046868] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:05.109 [2024-07-26 05:20:24.046957] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:05.109 [2024-07-26 05:20:24.047081] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:05.109 [2024-07-26 05:20:24.047098] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ab80 name raid_bdev1, state offline 00:23:05.109 05:20:24 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:23:05.109 05:20:24 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.367 05:20:24 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:23:05.368 05:20:24 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:23:05.368 05:20:24 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:05.627 [2024-07-26 05:20:24.490775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:05.627 [2024-07-26 05:20:24.491052] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:05.627 [2024-07-26 05:20:24.491123] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:23:05.627 [2024-07-26 05:20:24.491390] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:05.627 [2024-07-26 05:20:24.493664] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:05.627 [2024-07-26 05:20:24.493838] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:05.627 [2024-07-26 05:20:24.494052] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:23:05.627 [2024-07-26 05:20:24.494217] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:05.627 pt1 00:23:05.627 05:20:24 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:05.627 05:20:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:05.627 05:20:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:05.627 05:20:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:05.627 05:20:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:05.627 05:20:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:05.627 05:20:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:05.627 05:20:24 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:05.627 05:20:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:05.627 05:20:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:05.627 05:20:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.627 05:20:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:05.627 05:20:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:05.627 "name": "raid_bdev1", 00:23:05.627 "uuid": "6a9ecd61-a461-46bd-a4f4-77119c8efcbf", 00:23:05.627 "strip_size_kb": 64, 00:23:05.627 "state": "configuring", 00:23:05.627 "raid_level": "raid5f", 00:23:05.627 "superblock": true, 00:23:05.627 "num_base_bdevs": 3, 00:23:05.627 "num_base_bdevs_discovered": 1, 00:23:05.627 "num_base_bdevs_operational": 3, 00:23:05.627 "base_bdevs_list": [ 00:23:05.627 { 00:23:05.627 "name": "pt1", 00:23:05.627 "uuid": "651f9096-42a0-53ae-b7ac-aab7ea068e3f", 00:23:05.627 "is_configured": true, 00:23:05.627 "data_offset": 2048, 00:23:05.627 "data_size": 63488 00:23:05.627 }, 00:23:05.627 { 00:23:05.627 "name": null, 00:23:05.627 "uuid": "06f4c2e9-54be-5085-8052-baa13da3c968", 00:23:05.627 "is_configured": false, 00:23:05.627 "data_offset": 2048, 00:23:05.627 "data_size": 63488 00:23:05.627 }, 00:23:05.627 { 00:23:05.627 "name": null, 00:23:05.627 "uuid": "ddbfd7ec-f09b-57d1-8e1f-834b9ad27b5a", 00:23:05.627 "is_configured": false, 00:23:05.627 "data_offset": 2048, 00:23:05.627 "data_size": 63488 00:23:05.627 } 00:23:05.627 ] 00:23:05.627 }' 00:23:05.627 05:20:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:05.627 05:20:24 -- common/autotest_common.sh@10 -- # set +x 00:23:05.886 05:20:24 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:23:05.886 05:20:24 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:05.886 05:20:24 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:06.145 05:20:25 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:23:06.145 05:20:25 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:06.145 05:20:25 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:06.404 05:20:25 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:23:06.404 05:20:25 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:06.404 05:20:25 -- bdev/bdev_raid.sh@489 -- # i=2 00:23:06.404 05:20:25 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:06.663 [2024-07-26 05:20:25.555076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:06.663 [2024-07-26 05:20:25.555134] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:06.663 [2024-07-26 05:20:25.555162] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ba80 00:23:06.663 [2024-07-26 05:20:25.555175] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:06.663 [2024-07-26 05:20:25.555614] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:06.663 [2024-07-26 05:20:25.555652] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:06.663 [2024-07-26 05:20:25.555743] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on 
bdev pt3 00:23:06.663 [2024-07-26 05:20:25.555758] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:06.663 [2024-07-26 05:20:25.555787] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:06.663 [2024-07-26 05:20:25.555825] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000b780 name raid_bdev1, state configuring 00:23:06.663 [2024-07-26 05:20:25.555892] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:06.663 pt3 00:23:06.663 05:20:25 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:23:06.663 05:20:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:06.663 05:20:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:06.663 05:20:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:06.663 05:20:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:06.663 05:20:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:06.663 05:20:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:06.663 05:20:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:06.663 05:20:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:06.663 05:20:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:06.663 05:20:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.663 05:20:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.923 05:20:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:06.923 "name": "raid_bdev1", 00:23:06.923 "uuid": "6a9ecd61-a461-46bd-a4f4-77119c8efcbf", 00:23:06.923 "strip_size_kb": 64, 00:23:06.923 "state": "configuring", 00:23:06.923 "raid_level": "raid5f", 00:23:06.923 "superblock": true, 00:23:06.923 "num_base_bdevs": 3, 00:23:06.923 "num_base_bdevs_discovered": 1, 00:23:06.923 "num_base_bdevs_operational": 2, 00:23:06.923 "base_bdevs_list": [ 00:23:06.923 { 00:23:06.923 "name": null, 00:23:06.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:06.923 "is_configured": false, 00:23:06.923 "data_offset": 2048, 00:23:06.923 "data_size": 63488 00:23:06.923 }, 00:23:06.923 { 00:23:06.923 "name": null, 00:23:06.923 "uuid": "06f4c2e9-54be-5085-8052-baa13da3c968", 00:23:06.923 "is_configured": false, 00:23:06.923 "data_offset": 2048, 00:23:06.923 "data_size": 63488 00:23:06.923 }, 00:23:06.923 { 00:23:06.923 "name": "pt3", 00:23:06.923 "uuid": "ddbfd7ec-f09b-57d1-8e1f-834b9ad27b5a", 00:23:06.923 "is_configured": true, 00:23:06.923 "data_offset": 2048, 00:23:06.923 "data_size": 63488 00:23:06.923 } 00:23:06.923 ] 00:23:06.923 }' 00:23:06.923 05:20:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:06.923 05:20:25 -- common/autotest_common.sh@10 -- # set +x 00:23:07.181 05:20:26 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:23:07.181 05:20:26 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:23:07.181 05:20:26 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:07.441 [2024-07-26 05:20:26.337399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:07.441 [2024-07-26 05:20:26.337675] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:07.441 [2024-07-26 
05:20:26.337727] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c080 00:23:07.441 [2024-07-26 05:20:26.337750] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:07.441 [2024-07-26 05:20:26.338403] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:07.441 [2024-07-26 05:20:26.338452] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:07.441 [2024-07-26 05:20:26.338574] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:07.441 [2024-07-26 05:20:26.338633] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:07.441 [2024-07-26 05:20:26.338818] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000bd80 00:23:07.441 [2024-07-26 05:20:26.338845] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:07.441 [2024-07-26 05:20:26.338963] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:23:07.441 [2024-07-26 05:20:26.345077] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000bd80 00:23:07.441 [2024-07-26 05:20:26.345109] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000bd80 00:23:07.441 [2024-07-26 05:20:26.345441] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:07.441 pt2 00:23:07.441 05:20:26 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:23:07.441 05:20:26 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:23:07.441 05:20:26 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:07.441 05:20:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:07.441 05:20:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:07.441 05:20:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:07.441 05:20:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:07.441 05:20:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:07.441 05:20:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:07.441 05:20:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:07.441 05:20:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:07.441 05:20:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:07.441 05:20:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.441 05:20:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.700 05:20:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:07.700 "name": "raid_bdev1", 00:23:07.700 "uuid": "6a9ecd61-a461-46bd-a4f4-77119c8efcbf", 00:23:07.700 "strip_size_kb": 64, 00:23:07.700 "state": "online", 00:23:07.700 "raid_level": "raid5f", 00:23:07.700 "superblock": true, 00:23:07.700 "num_base_bdevs": 3, 00:23:07.700 "num_base_bdevs_discovered": 2, 00:23:07.700 "num_base_bdevs_operational": 2, 00:23:07.700 "base_bdevs_list": [ 00:23:07.700 { 00:23:07.700 "name": null, 00:23:07.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.700 "is_configured": false, 00:23:07.700 "data_offset": 2048, 00:23:07.700 "data_size": 63488 00:23:07.700 }, 00:23:07.700 { 00:23:07.700 "name": "pt2", 00:23:07.700 "uuid": "06f4c2e9-54be-5085-8052-baa13da3c968", 00:23:07.700 "is_configured": true, 00:23:07.700 "data_offset": 2048, 
00:23:07.700 "data_size": 63488 00:23:07.700 }, 00:23:07.700 { 00:23:07.700 "name": "pt3", 00:23:07.700 "uuid": "ddbfd7ec-f09b-57d1-8e1f-834b9ad27b5a", 00:23:07.700 "is_configured": true, 00:23:07.700 "data_offset": 2048, 00:23:07.700 "data_size": 63488 00:23:07.700 } 00:23:07.700 ] 00:23:07.700 }' 00:23:07.700 05:20:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:07.700 05:20:26 -- common/autotest_common.sh@10 -- # set +x 00:23:07.959 05:20:26 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:07.959 05:20:26 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:23:08.218 [2024-07-26 05:20:27.139883] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:08.218 05:20:27 -- bdev/bdev_raid.sh@506 -- # '[' 6a9ecd61-a461-46bd-a4f4-77119c8efcbf '!=' 6a9ecd61-a461-46bd-a4f4-77119c8efcbf ']' 00:23:08.218 05:20:27 -- bdev/bdev_raid.sh@511 -- # killprocess 82935 00:23:08.218 05:20:27 -- common/autotest_common.sh@926 -- # '[' -z 82935 ']' 00:23:08.218 05:20:27 -- common/autotest_common.sh@930 -- # kill -0 82935 00:23:08.218 05:20:27 -- common/autotest_common.sh@931 -- # uname 00:23:08.218 05:20:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:08.218 05:20:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82935 00:23:08.218 killing process with pid 82935 00:23:08.218 05:20:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:08.218 05:20:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:08.218 05:20:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82935' 00:23:08.218 05:20:27 -- common/autotest_common.sh@945 -- # kill 82935 00:23:08.218 [2024-07-26 05:20:27.191441] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:08.218 05:20:27 -- common/autotest_common.sh@950 -- # wait 82935 00:23:08.218 [2024-07-26 05:20:27.191503] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:08.218 [2024-07-26 05:20:27.191563] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:08.218 [2024-07-26 05:20:27.191575] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000bd80 name raid_bdev1, state offline 00:23:08.476 [2024-07-26 05:20:27.383686] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@513 -- # return 0 00:23:09.414 00:23:09.414 real 0m15.172s 00:23:09.414 user 0m26.145s 00:23:09.414 sys 0m2.312s 00:23:09.414 ************************************ 00:23:09.414 END TEST raid5f_superblock_test 00:23:09.414 ************************************ 00:23:09.414 05:20:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:09.414 05:20:28 -- common/autotest_common.sh@10 -- # set +x 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false 00:23:09.414 05:20:28 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:23:09.414 05:20:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:09.414 05:20:28 -- common/autotest_common.sh@10 -- # set +x 00:23:09.414 ************************************ 00:23:09.414 START TEST raid5f_rebuild_test 00:23:09.414 ************************************ 00:23:09.414 05:20:28 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 3 false 
false 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:23:09.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@544 -- # raid_pid=83473 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@545 -- # waitforlisten 83473 /var/tmp/spdk-raid.sock 00:23:09.414 05:20:28 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:09.414 05:20:28 -- common/autotest_common.sh@819 -- # '[' -z 83473 ']' 00:23:09.414 05:20:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:09.414 05:20:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:09.414 05:20:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:09.414 05:20:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:09.414 05:20:28 -- common/autotest_common.sh@10 -- # set +x 00:23:09.414 [2024-07-26 05:20:28.414198] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:23:09.414 [2024-07-26 05:20:28.414563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83473 ] 00:23:09.414 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:09.414 Zero copy mechanism will not be used. 
00:23:09.673 [2024-07-26 05:20:28.576391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.673 [2024-07-26 05:20:28.726612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.931 [2024-07-26 05:20:28.876051] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:10.498 05:20:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:10.498 05:20:29 -- common/autotest_common.sh@852 -- # return 0 00:23:10.498 05:20:29 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:10.498 05:20:29 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:10.498 05:20:29 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:10.498 BaseBdev1 00:23:10.758 05:20:29 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:10.758 05:20:29 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:10.758 05:20:29 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:11.017 BaseBdev2 00:23:11.017 05:20:29 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:11.017 05:20:29 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:11.017 05:20:29 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:11.276 BaseBdev3 00:23:11.276 05:20:30 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:11.276 spare_malloc 00:23:11.276 05:20:30 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:11.535 spare_delay 00:23:11.535 05:20:30 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:11.794 [2024-07-26 05:20:30.693895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:11.794 [2024-07-26 05:20:30.694172] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:11.794 [2024-07-26 05:20:30.694209] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:23:11.794 [2024-07-26 05:20:30.694226] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:11.794 [2024-07-26 05:20:30.696522] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:11.794 [2024-07-26 05:20:30.696567] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:11.794 spare 00:23:11.794 05:20:30 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:23:11.794 [2024-07-26 05:20:30.869968] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:11.794 [2024-07-26 05:20:30.872040] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:11.794 [2024-07-26 05:20:30.872094] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:11.794 [2024-07-26 05:20:30.872188] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008780 00:23:11.794 
[2024-07-26 05:20:30.872203] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:23:11.794 [2024-07-26 05:20:30.872317] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:23:11.794 [2024-07-26 05:20:30.876847] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008780 00:23:11.794 [2024-07-26 05:20:30.877036] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008780 00:23:11.794 [2024-07-26 05:20:30.877398] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:11.794 05:20:30 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:11.794 05:20:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:11.794 05:20:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:11.794 05:20:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:11.794 05:20:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:11.794 05:20:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:11.795 05:20:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:11.795 05:20:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:11.795 05:20:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:11.795 05:20:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:11.795 05:20:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.795 05:20:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:12.054 05:20:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:12.054 "name": "raid_bdev1", 00:23:12.054 "uuid": "284c836a-3d12-40f3-b73b-c79cc164209e", 00:23:12.054 "strip_size_kb": 64, 00:23:12.054 "state": "online", 00:23:12.054 "raid_level": "raid5f", 00:23:12.054 "superblock": false, 00:23:12.054 "num_base_bdevs": 3, 00:23:12.054 "num_base_bdevs_discovered": 3, 00:23:12.054 "num_base_bdevs_operational": 3, 00:23:12.054 "base_bdevs_list": [ 00:23:12.054 { 00:23:12.054 "name": "BaseBdev1", 00:23:12.054 "uuid": "4b645136-9378-4616-98e6-18989b142e75", 00:23:12.054 "is_configured": true, 00:23:12.054 "data_offset": 0, 00:23:12.054 "data_size": 65536 00:23:12.054 }, 00:23:12.054 { 00:23:12.054 "name": "BaseBdev2", 00:23:12.054 "uuid": "6090243c-d311-4957-8130-24a34509eb30", 00:23:12.054 "is_configured": true, 00:23:12.054 "data_offset": 0, 00:23:12.054 "data_size": 65536 00:23:12.054 }, 00:23:12.054 { 00:23:12.054 "name": "BaseBdev3", 00:23:12.054 "uuid": "94798873-8358-451c-9335-b7571b3cf39c", 00:23:12.054 "is_configured": true, 00:23:12.054 "data_offset": 0, 00:23:12.054 "data_size": 65536 00:23:12.054 } 00:23:12.054 ] 00:23:12.054 }' 00:23:12.054 05:20:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:12.054 05:20:31 -- common/autotest_common.sh@10 -- # set +x 00:23:12.313 05:20:31 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:12.313 05:20:31 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:12.571 [2024-07-26 05:20:31.622061] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:12.571 05:20:31 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=131072 00:23:12.571 05:20:31 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:23:12.571 05:20:31 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:12.830 05:20:31 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:23:12.830 05:20:31 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:23:12.830 05:20:31 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:23:12.830 05:20:31 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:12.830 05:20:31 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:12.830 05:20:31 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:12.830 05:20:31 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:12.830 05:20:31 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:12.830 05:20:31 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:12.830 05:20:31 -- bdev/nbd_common.sh@12 -- # local i 00:23:12.830 05:20:31 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:12.830 05:20:31 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:12.830 05:20:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:13.089 [2024-07-26 05:20:31.994031] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:23:13.089 /dev/nbd0 00:23:13.089 05:20:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:13.089 05:20:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:13.089 05:20:32 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:13.089 05:20:32 -- common/autotest_common.sh@857 -- # local i 00:23:13.089 05:20:32 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:13.089 05:20:32 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:13.089 05:20:32 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:13.089 05:20:32 -- common/autotest_common.sh@861 -- # break 00:23:13.089 05:20:32 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:13.089 05:20:32 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:13.089 05:20:32 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:13.089 1+0 records in 00:23:13.090 1+0 records out 00:23:13.090 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269779 s, 15.2 MB/s 00:23:13.090 05:20:32 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:13.090 05:20:32 -- common/autotest_common.sh@874 -- # size=4096 00:23:13.090 05:20:32 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:13.090 05:20:32 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:13.090 05:20:32 -- common/autotest_common.sh@877 -- # return 0 00:23:13.090 05:20:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:13.090 05:20:32 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:13.090 05:20:32 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:23:13.090 05:20:32 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:23:13.090 05:20:32 -- bdev/bdev_raid.sh@582 -- # echo 128 00:23:13.090 05:20:32 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:23:13.349 512+0 records in 00:23:13.349 512+0 records out 00:23:13.349 67108864 bytes (67 MB, 64 MiB) copied, 0.409765 s, 164 MB/s 00:23:13.608 05:20:32 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:13.608 05:20:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 
00:23:13.608 05:20:32 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:13.608 05:20:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:13.608 05:20:32 -- bdev/nbd_common.sh@51 -- # local i 00:23:13.608 05:20:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:13.608 05:20:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:13.608 05:20:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:13.608 05:20:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:13.608 05:20:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:13.608 [2024-07-26 05:20:32.660122] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:13.608 05:20:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:13.608 05:20:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:13.608 05:20:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:13.608 05:20:32 -- bdev/nbd_common.sh@41 -- # break 00:23:13.608 05:20:32 -- bdev/nbd_common.sh@45 -- # return 0 00:23:13.608 05:20:32 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:13.868 [2024-07-26 05:20:32.837814] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:13.868 05:20:32 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:13.868 05:20:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:13.868 05:20:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:13.868 05:20:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:13.868 05:20:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:13.868 05:20:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:13.868 05:20:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:13.868 05:20:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:13.868 05:20:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:13.868 05:20:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:13.868 05:20:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.868 05:20:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:14.127 05:20:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:14.127 "name": "raid_bdev1", 00:23:14.127 "uuid": "284c836a-3d12-40f3-b73b-c79cc164209e", 00:23:14.127 "strip_size_kb": 64, 00:23:14.127 "state": "online", 00:23:14.127 "raid_level": "raid5f", 00:23:14.127 "superblock": false, 00:23:14.127 "num_base_bdevs": 3, 00:23:14.127 "num_base_bdevs_discovered": 2, 00:23:14.127 "num_base_bdevs_operational": 2, 00:23:14.127 "base_bdevs_list": [ 00:23:14.127 { 00:23:14.127 "name": null, 00:23:14.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.127 "is_configured": false, 00:23:14.127 "data_offset": 0, 00:23:14.127 "data_size": 65536 00:23:14.127 }, 00:23:14.127 { 00:23:14.127 "name": "BaseBdev2", 00:23:14.127 "uuid": "6090243c-d311-4957-8130-24a34509eb30", 00:23:14.127 "is_configured": true, 00:23:14.127 "data_offset": 0, 00:23:14.127 "data_size": 65536 00:23:14.127 }, 00:23:14.127 { 00:23:14.127 "name": "BaseBdev3", 00:23:14.127 "uuid": "94798873-8358-451c-9335-b7571b3cf39c", 00:23:14.127 "is_configured": true, 00:23:14.127 "data_offset": 0, 00:23:14.127 "data_size": 65536 00:23:14.127 } 00:23:14.127 ] 00:23:14.127 }' 
00:23:14.127 05:20:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:14.127 05:20:33 -- common/autotest_common.sh@10 -- # set +x 00:23:14.393 05:20:33 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:14.653 [2024-07-26 05:20:33.537942] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:14.653 [2024-07-26 05:20:33.537983] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:14.653 [2024-07-26 05:20:33.548727] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002af30 00:23:14.653 [2024-07-26 05:20:33.554484] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:14.653 05:20:33 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:15.587 05:20:34 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:15.587 05:20:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:15.587 05:20:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:15.587 05:20:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:15.587 05:20:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:15.587 05:20:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:15.587 05:20:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.846 05:20:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:15.846 "name": "raid_bdev1", 00:23:15.846 "uuid": "284c836a-3d12-40f3-b73b-c79cc164209e", 00:23:15.846 "strip_size_kb": 64, 00:23:15.846 "state": "online", 00:23:15.846 "raid_level": "raid5f", 00:23:15.846 "superblock": false, 00:23:15.846 "num_base_bdevs": 3, 00:23:15.846 "num_base_bdevs_discovered": 3, 00:23:15.846 "num_base_bdevs_operational": 3, 00:23:15.846 "process": { 00:23:15.846 "type": "rebuild", 00:23:15.846 "target": "spare", 00:23:15.846 "progress": { 00:23:15.846 "blocks": 22528, 00:23:15.846 "percent": 17 00:23:15.846 } 00:23:15.846 }, 00:23:15.846 "base_bdevs_list": [ 00:23:15.846 { 00:23:15.846 "name": "spare", 00:23:15.846 "uuid": "cd9af27f-7a77-5943-babb-ca9f4bf29d16", 00:23:15.846 "is_configured": true, 00:23:15.846 "data_offset": 0, 00:23:15.846 "data_size": 65536 00:23:15.846 }, 00:23:15.846 { 00:23:15.846 "name": "BaseBdev2", 00:23:15.846 "uuid": "6090243c-d311-4957-8130-24a34509eb30", 00:23:15.846 "is_configured": true, 00:23:15.846 "data_offset": 0, 00:23:15.846 "data_size": 65536 00:23:15.846 }, 00:23:15.846 { 00:23:15.846 "name": "BaseBdev3", 00:23:15.846 "uuid": "94798873-8358-451c-9335-b7571b3cf39c", 00:23:15.846 "is_configured": true, 00:23:15.846 "data_offset": 0, 00:23:15.846 "data_size": 65536 00:23:15.846 } 00:23:15.846 ] 00:23:15.846 }' 00:23:15.846 05:20:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:15.846 05:20:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:15.846 05:20:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:15.846 05:20:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:15.846 05:20:34 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:16.106 [2024-07-26 05:20:35.011924] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:16.106 [2024-07-26 05:20:35.065555] 
bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:16.106 [2024-07-26 05:20:35.065785] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:16.106 05:20:35 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:16.106 05:20:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:16.106 05:20:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:16.106 05:20:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:16.106 05:20:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:16.106 05:20:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:16.106 05:20:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:16.106 05:20:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:16.106 05:20:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:16.106 05:20:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:16.106 05:20:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.106 05:20:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:16.365 05:20:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:16.365 "name": "raid_bdev1", 00:23:16.365 "uuid": "284c836a-3d12-40f3-b73b-c79cc164209e", 00:23:16.365 "strip_size_kb": 64, 00:23:16.365 "state": "online", 00:23:16.365 "raid_level": "raid5f", 00:23:16.365 "superblock": false, 00:23:16.365 "num_base_bdevs": 3, 00:23:16.365 "num_base_bdevs_discovered": 2, 00:23:16.365 "num_base_bdevs_operational": 2, 00:23:16.365 "base_bdevs_list": [ 00:23:16.365 { 00:23:16.365 "name": null, 00:23:16.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.365 "is_configured": false, 00:23:16.365 "data_offset": 0, 00:23:16.365 "data_size": 65536 00:23:16.365 }, 00:23:16.365 { 00:23:16.365 "name": "BaseBdev2", 00:23:16.365 "uuid": "6090243c-d311-4957-8130-24a34509eb30", 00:23:16.365 "is_configured": true, 00:23:16.365 "data_offset": 0, 00:23:16.365 "data_size": 65536 00:23:16.365 }, 00:23:16.365 { 00:23:16.365 "name": "BaseBdev3", 00:23:16.365 "uuid": "94798873-8358-451c-9335-b7571b3cf39c", 00:23:16.365 "is_configured": true, 00:23:16.365 "data_offset": 0, 00:23:16.365 "data_size": 65536 00:23:16.365 } 00:23:16.365 ] 00:23:16.365 }' 00:23:16.365 05:20:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:16.365 05:20:35 -- common/autotest_common.sh@10 -- # set +x 00:23:16.625 05:20:35 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:16.625 05:20:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:16.625 05:20:35 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:16.625 05:20:35 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:16.625 05:20:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:16.625 05:20:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.625 05:20:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:16.884 05:20:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:16.884 "name": "raid_bdev1", 00:23:16.884 "uuid": "284c836a-3d12-40f3-b73b-c79cc164209e", 00:23:16.884 "strip_size_kb": 64, 00:23:16.884 "state": "online", 00:23:16.884 "raid_level": "raid5f", 00:23:16.884 "superblock": false, 00:23:16.884 "num_base_bdevs": 3, 00:23:16.884 
"num_base_bdevs_discovered": 2, 00:23:16.884 "num_base_bdevs_operational": 2, 00:23:16.884 "base_bdevs_list": [ 00:23:16.884 { 00:23:16.884 "name": null, 00:23:16.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.884 "is_configured": false, 00:23:16.884 "data_offset": 0, 00:23:16.884 "data_size": 65536 00:23:16.884 }, 00:23:16.884 { 00:23:16.884 "name": "BaseBdev2", 00:23:16.884 "uuid": "6090243c-d311-4957-8130-24a34509eb30", 00:23:16.884 "is_configured": true, 00:23:16.884 "data_offset": 0, 00:23:16.884 "data_size": 65536 00:23:16.884 }, 00:23:16.884 { 00:23:16.884 "name": "BaseBdev3", 00:23:16.884 "uuid": "94798873-8358-451c-9335-b7571b3cf39c", 00:23:16.884 "is_configured": true, 00:23:16.884 "data_offset": 0, 00:23:16.884 "data_size": 65536 00:23:16.884 } 00:23:16.884 ] 00:23:16.884 }' 00:23:16.884 05:20:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:16.884 05:20:35 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:16.884 05:20:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:16.884 05:20:35 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:16.884 05:20:35 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:17.143 [2024-07-26 05:20:36.053213] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:17.143 [2024-07-26 05:20:36.053256] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:17.143 [2024-07-26 05:20:36.063166] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002b000 00:23:17.143 [2024-07-26 05:20:36.068746] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:17.143 05:20:36 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:18.079 05:20:37 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:18.079 05:20:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:18.079 05:20:37 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:18.079 05:20:37 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:18.079 05:20:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:18.079 05:20:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.079 05:20:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.338 05:20:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:18.338 "name": "raid_bdev1", 00:23:18.338 "uuid": "284c836a-3d12-40f3-b73b-c79cc164209e", 00:23:18.338 "strip_size_kb": 64, 00:23:18.338 "state": "online", 00:23:18.338 "raid_level": "raid5f", 00:23:18.338 "superblock": false, 00:23:18.338 "num_base_bdevs": 3, 00:23:18.338 "num_base_bdevs_discovered": 3, 00:23:18.338 "num_base_bdevs_operational": 3, 00:23:18.338 "process": { 00:23:18.338 "type": "rebuild", 00:23:18.338 "target": "spare", 00:23:18.338 "progress": { 00:23:18.338 "blocks": 24576, 00:23:18.338 "percent": 18 00:23:18.338 } 00:23:18.338 }, 00:23:18.338 "base_bdevs_list": [ 00:23:18.338 { 00:23:18.338 "name": "spare", 00:23:18.338 "uuid": "cd9af27f-7a77-5943-babb-ca9f4bf29d16", 00:23:18.338 "is_configured": true, 00:23:18.338 "data_offset": 0, 00:23:18.338 "data_size": 65536 00:23:18.338 }, 00:23:18.338 { 00:23:18.338 "name": "BaseBdev2", 00:23:18.338 "uuid": "6090243c-d311-4957-8130-24a34509eb30", 00:23:18.338 "is_configured": true, 
00:23:18.338 "data_offset": 0, 00:23:18.338 "data_size": 65536 00:23:18.338 }, 00:23:18.338 { 00:23:18.338 "name": "BaseBdev3", 00:23:18.338 "uuid": "94798873-8358-451c-9335-b7571b3cf39c", 00:23:18.338 "is_configured": true, 00:23:18.338 "data_offset": 0, 00:23:18.338 "data_size": 65536 00:23:18.338 } 00:23:18.338 ] 00:23:18.338 }' 00:23:18.338 05:20:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:18.338 05:20:37 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:18.338 05:20:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:18.338 05:20:37 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:18.338 05:20:37 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:23:18.338 05:20:37 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:23:18.338 05:20:37 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:23:18.338 05:20:37 -- bdev/bdev_raid.sh@657 -- # local timeout=544 00:23:18.338 05:20:37 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:18.338 05:20:37 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:18.338 05:20:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:18.338 05:20:37 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:18.338 05:20:37 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:18.338 05:20:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:18.338 05:20:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.338 05:20:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.610 05:20:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:18.610 "name": "raid_bdev1", 00:23:18.610 "uuid": "284c836a-3d12-40f3-b73b-c79cc164209e", 00:23:18.610 "strip_size_kb": 64, 00:23:18.610 "state": "online", 00:23:18.610 "raid_level": "raid5f", 00:23:18.610 "superblock": false, 00:23:18.610 "num_base_bdevs": 3, 00:23:18.610 "num_base_bdevs_discovered": 3, 00:23:18.610 "num_base_bdevs_operational": 3, 00:23:18.610 "process": { 00:23:18.610 "type": "rebuild", 00:23:18.610 "target": "spare", 00:23:18.610 "progress": { 00:23:18.610 "blocks": 28672, 00:23:18.610 "percent": 21 00:23:18.610 } 00:23:18.610 }, 00:23:18.610 "base_bdevs_list": [ 00:23:18.610 { 00:23:18.610 "name": "spare", 00:23:18.610 "uuid": "cd9af27f-7a77-5943-babb-ca9f4bf29d16", 00:23:18.610 "is_configured": true, 00:23:18.610 "data_offset": 0, 00:23:18.610 "data_size": 65536 00:23:18.610 }, 00:23:18.610 { 00:23:18.610 "name": "BaseBdev2", 00:23:18.610 "uuid": "6090243c-d311-4957-8130-24a34509eb30", 00:23:18.610 "is_configured": true, 00:23:18.610 "data_offset": 0, 00:23:18.610 "data_size": 65536 00:23:18.610 }, 00:23:18.610 { 00:23:18.610 "name": "BaseBdev3", 00:23:18.610 "uuid": "94798873-8358-451c-9335-b7571b3cf39c", 00:23:18.610 "is_configured": true, 00:23:18.610 "data_offset": 0, 00:23:18.610 "data_size": 65536 00:23:18.610 } 00:23:18.610 ] 00:23:18.610 }' 00:23:18.610 05:20:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:18.610 05:20:37 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:18.610 05:20:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:18.610 05:20:37 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:18.610 05:20:37 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:19.557 05:20:38 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:19.557 
05:20:38 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:19.557 05:20:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:19.557 05:20:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:19.557 05:20:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:19.557 05:20:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:19.557 05:20:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.557 05:20:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.816 05:20:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:19.816 "name": "raid_bdev1", 00:23:19.816 "uuid": "284c836a-3d12-40f3-b73b-c79cc164209e", 00:23:19.816 "strip_size_kb": 64, 00:23:19.816 "state": "online", 00:23:19.816 "raid_level": "raid5f", 00:23:19.816 "superblock": false, 00:23:19.816 "num_base_bdevs": 3, 00:23:19.816 "num_base_bdevs_discovered": 3, 00:23:19.816 "num_base_bdevs_operational": 3, 00:23:19.816 "process": { 00:23:19.816 "type": "rebuild", 00:23:19.816 "target": "spare", 00:23:19.816 "progress": { 00:23:19.816 "blocks": 57344, 00:23:19.816 "percent": 43 00:23:19.816 } 00:23:19.816 }, 00:23:19.816 "base_bdevs_list": [ 00:23:19.816 { 00:23:19.816 "name": "spare", 00:23:19.816 "uuid": "cd9af27f-7a77-5943-babb-ca9f4bf29d16", 00:23:19.816 "is_configured": true, 00:23:19.816 "data_offset": 0, 00:23:19.816 "data_size": 65536 00:23:19.816 }, 00:23:19.816 { 00:23:19.816 "name": "BaseBdev2", 00:23:19.816 "uuid": "6090243c-d311-4957-8130-24a34509eb30", 00:23:19.816 "is_configured": true, 00:23:19.816 "data_offset": 0, 00:23:19.816 "data_size": 65536 00:23:19.816 }, 00:23:19.816 { 00:23:19.816 "name": "BaseBdev3", 00:23:19.816 "uuid": "94798873-8358-451c-9335-b7571b3cf39c", 00:23:19.816 "is_configured": true, 00:23:19.816 "data_offset": 0, 00:23:19.816 "data_size": 65536 00:23:19.816 } 00:23:19.816 ] 00:23:19.816 }' 00:23:19.816 05:20:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:20.074 05:20:38 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:20.074 05:20:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:20.074 05:20:38 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:20.074 05:20:38 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:21.010 05:20:39 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:21.010 05:20:39 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:21.010 05:20:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:21.010 05:20:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:21.010 05:20:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:21.010 05:20:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:21.010 05:20:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:21.010 05:20:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.268 05:20:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:21.268 "name": "raid_bdev1", 00:23:21.268 "uuid": "284c836a-3d12-40f3-b73b-c79cc164209e", 00:23:21.268 "strip_size_kb": 64, 00:23:21.268 "state": "online", 00:23:21.268 "raid_level": "raid5f", 00:23:21.268 "superblock": false, 00:23:21.269 "num_base_bdevs": 3, 00:23:21.269 "num_base_bdevs_discovered": 3, 00:23:21.269 "num_base_bdevs_operational": 3, 
00:23:21.269 "process": { 00:23:21.269 "type": "rebuild", 00:23:21.269 "target": "spare", 00:23:21.269 "progress": { 00:23:21.269 "blocks": 81920, 00:23:21.269 "percent": 62 00:23:21.269 } 00:23:21.269 }, 00:23:21.269 "base_bdevs_list": [ 00:23:21.269 { 00:23:21.269 "name": "spare", 00:23:21.269 "uuid": "cd9af27f-7a77-5943-babb-ca9f4bf29d16", 00:23:21.269 "is_configured": true, 00:23:21.269 "data_offset": 0, 00:23:21.269 "data_size": 65536 00:23:21.269 }, 00:23:21.269 { 00:23:21.269 "name": "BaseBdev2", 00:23:21.269 "uuid": "6090243c-d311-4957-8130-24a34509eb30", 00:23:21.269 "is_configured": true, 00:23:21.269 "data_offset": 0, 00:23:21.269 "data_size": 65536 00:23:21.269 }, 00:23:21.269 { 00:23:21.269 "name": "BaseBdev3", 00:23:21.269 "uuid": "94798873-8358-451c-9335-b7571b3cf39c", 00:23:21.269 "is_configured": true, 00:23:21.269 "data_offset": 0, 00:23:21.269 "data_size": 65536 00:23:21.269 } 00:23:21.269 ] 00:23:21.269 }' 00:23:21.269 05:20:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:21.269 05:20:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:21.269 05:20:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:21.269 05:20:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:21.269 05:20:40 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:22.204 05:20:41 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:22.204 05:20:41 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:22.204 05:20:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:22.204 05:20:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:22.204 05:20:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:22.204 05:20:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:22.204 05:20:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.204 05:20:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.462 05:20:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:22.462 "name": "raid_bdev1", 00:23:22.462 "uuid": "284c836a-3d12-40f3-b73b-c79cc164209e", 00:23:22.462 "strip_size_kb": 64, 00:23:22.462 "state": "online", 00:23:22.462 "raid_level": "raid5f", 00:23:22.462 "superblock": false, 00:23:22.462 "num_base_bdevs": 3, 00:23:22.462 "num_base_bdevs_discovered": 3, 00:23:22.462 "num_base_bdevs_operational": 3, 00:23:22.462 "process": { 00:23:22.462 "type": "rebuild", 00:23:22.462 "target": "spare", 00:23:22.462 "progress": { 00:23:22.462 "blocks": 108544, 00:23:22.462 "percent": 82 00:23:22.462 } 00:23:22.462 }, 00:23:22.462 "base_bdevs_list": [ 00:23:22.462 { 00:23:22.462 "name": "spare", 00:23:22.462 "uuid": "cd9af27f-7a77-5943-babb-ca9f4bf29d16", 00:23:22.462 "is_configured": true, 00:23:22.462 "data_offset": 0, 00:23:22.462 "data_size": 65536 00:23:22.462 }, 00:23:22.462 { 00:23:22.462 "name": "BaseBdev2", 00:23:22.462 "uuid": "6090243c-d311-4957-8130-24a34509eb30", 00:23:22.462 "is_configured": true, 00:23:22.462 "data_offset": 0, 00:23:22.462 "data_size": 65536 00:23:22.462 }, 00:23:22.462 { 00:23:22.462 "name": "BaseBdev3", 00:23:22.462 "uuid": "94798873-8358-451c-9335-b7571b3cf39c", 00:23:22.462 "is_configured": true, 00:23:22.462 "data_offset": 0, 00:23:22.462 "data_size": 65536 00:23:22.462 } 00:23:22.462 ] 00:23:22.462 }' 00:23:22.462 05:20:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:22.462 05:20:41 -- 
bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:22.462 05:20:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:22.462 05:20:41 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:22.462 05:20:41 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:23.398 05:20:42 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:23.398 05:20:42 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:23.398 05:20:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:23.398 05:20:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:23.399 05:20:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:23.399 05:20:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:23.399 05:20:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.399 05:20:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.657 [2024-07-26 05:20:42.515165] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:23.657 [2024-07-26 05:20:42.515260] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:23.657 [2024-07-26 05:20:42.515327] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:23.657 05:20:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:23.657 "name": "raid_bdev1", 00:23:23.657 "uuid": "284c836a-3d12-40f3-b73b-c79cc164209e", 00:23:23.657 "strip_size_kb": 64, 00:23:23.657 "state": "online", 00:23:23.657 "raid_level": "raid5f", 00:23:23.657 "superblock": false, 00:23:23.657 "num_base_bdevs": 3, 00:23:23.657 "num_base_bdevs_discovered": 3, 00:23:23.657 "num_base_bdevs_operational": 3, 00:23:23.657 "base_bdevs_list": [ 00:23:23.657 { 00:23:23.657 "name": "spare", 00:23:23.657 "uuid": "cd9af27f-7a77-5943-babb-ca9f4bf29d16", 00:23:23.657 "is_configured": true, 00:23:23.657 "data_offset": 0, 00:23:23.657 "data_size": 65536 00:23:23.657 }, 00:23:23.657 { 00:23:23.657 "name": "BaseBdev2", 00:23:23.657 "uuid": "6090243c-d311-4957-8130-24a34509eb30", 00:23:23.657 "is_configured": true, 00:23:23.657 "data_offset": 0, 00:23:23.657 "data_size": 65536 00:23:23.657 }, 00:23:23.657 { 00:23:23.657 "name": "BaseBdev3", 00:23:23.657 "uuid": "94798873-8358-451c-9335-b7571b3cf39c", 00:23:23.657 "is_configured": true, 00:23:23.657 "data_offset": 0, 00:23:23.657 "data_size": 65536 00:23:23.657 } 00:23:23.657 ] 00:23:23.657 }' 00:23:23.657 05:20:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:23.657 05:20:42 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:23.657 05:20:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:23.657 05:20:42 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:23.657 05:20:42 -- bdev/bdev_raid.sh@660 -- # break 00:23:23.657 05:20:42 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:23.657 05:20:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:23.658 05:20:42 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:23.658 05:20:42 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:23.658 05:20:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:23.658 05:20:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.658 05:20:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.917 05:20:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:23.917 "name": "raid_bdev1", 00:23:23.917 "uuid": "284c836a-3d12-40f3-b73b-c79cc164209e", 00:23:23.917 "strip_size_kb": 64, 00:23:23.917 "state": "online", 00:23:23.917 "raid_level": "raid5f", 00:23:23.917 "superblock": false, 00:23:23.917 "num_base_bdevs": 3, 00:23:23.917 "num_base_bdevs_discovered": 3, 00:23:23.917 "num_base_bdevs_operational": 3, 00:23:23.917 "base_bdevs_list": [ 00:23:23.917 { 00:23:23.917 "name": "spare", 00:23:23.917 "uuid": "cd9af27f-7a77-5943-babb-ca9f4bf29d16", 00:23:23.917 "is_configured": true, 00:23:23.917 "data_offset": 0, 00:23:23.917 "data_size": 65536 00:23:23.917 }, 00:23:23.917 { 00:23:23.917 "name": "BaseBdev2", 00:23:23.917 "uuid": "6090243c-d311-4957-8130-24a34509eb30", 00:23:23.917 "is_configured": true, 00:23:23.917 "data_offset": 0, 00:23:23.917 "data_size": 65536 00:23:23.917 }, 00:23:23.917 { 00:23:23.917 "name": "BaseBdev3", 00:23:23.917 "uuid": "94798873-8358-451c-9335-b7571b3cf39c", 00:23:23.917 "is_configured": true, 00:23:23.917 "data_offset": 0, 00:23:23.917 "data_size": 65536 00:23:23.917 } 00:23:23.917 ] 00:23:23.917 }' 00:23:23.917 05:20:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:23.917 05:20:42 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:23.917 05:20:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:23.917 05:20:42 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:23.917 05:20:42 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:23.917 05:20:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:23.917 05:20:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:23.917 05:20:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:23.917 05:20:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:23.917 05:20:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:23.917 05:20:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:23.917 05:20:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:23.917 05:20:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:23.917 05:20:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:23.917 05:20:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.917 05:20:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:24.176 05:20:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:24.176 "name": "raid_bdev1", 00:23:24.176 "uuid": "284c836a-3d12-40f3-b73b-c79cc164209e", 00:23:24.176 "strip_size_kb": 64, 00:23:24.176 "state": "online", 00:23:24.176 "raid_level": "raid5f", 00:23:24.176 "superblock": false, 00:23:24.176 "num_base_bdevs": 3, 00:23:24.176 "num_base_bdevs_discovered": 3, 00:23:24.176 "num_base_bdevs_operational": 3, 00:23:24.176 "base_bdevs_list": [ 00:23:24.176 { 00:23:24.176 "name": "spare", 00:23:24.176 "uuid": "cd9af27f-7a77-5943-babb-ca9f4bf29d16", 00:23:24.176 "is_configured": true, 00:23:24.176 "data_offset": 0, 00:23:24.176 "data_size": 65536 00:23:24.176 }, 00:23:24.176 { 00:23:24.176 "name": "BaseBdev2", 00:23:24.176 "uuid": "6090243c-d311-4957-8130-24a34509eb30", 00:23:24.176 "is_configured": true, 00:23:24.176 "data_offset": 0, 00:23:24.176 "data_size": 65536 00:23:24.176 }, 00:23:24.176 { 00:23:24.176 "name": "BaseBdev3", 00:23:24.176 "uuid": 
"94798873-8358-451c-9335-b7571b3cf39c", 00:23:24.176 "is_configured": true, 00:23:24.176 "data_offset": 0, 00:23:24.176 "data_size": 65536 00:23:24.176 } 00:23:24.176 ] 00:23:24.176 }' 00:23:24.176 05:20:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:24.176 05:20:43 -- common/autotest_common.sh@10 -- # set +x 00:23:24.437 05:20:43 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:24.701 [2024-07-26 05:20:43.710548] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:24.701 [2024-07-26 05:20:43.710583] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:24.701 [2024-07-26 05:20:43.710662] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:24.701 [2024-07-26 05:20:43.710785] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:24.701 [2024-07-26 05:20:43.710806] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008780 name raid_bdev1, state offline 00:23:24.701 05:20:43 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.701 05:20:43 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:24.960 05:20:43 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:24.960 05:20:43 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:23:24.960 05:20:43 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:24.960 05:20:43 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:24.960 05:20:43 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:24.960 05:20:43 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:24.960 05:20:43 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:24.960 05:20:43 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:24.960 05:20:43 -- bdev/nbd_common.sh@12 -- # local i 00:23:24.960 05:20:43 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:24.960 05:20:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:24.960 05:20:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:25.219 /dev/nbd0 00:23:25.219 05:20:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:25.219 05:20:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:25.219 05:20:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:25.219 05:20:44 -- common/autotest_common.sh@857 -- # local i 00:23:25.219 05:20:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:25.219 05:20:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:25.219 05:20:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:25.219 05:20:44 -- common/autotest_common.sh@861 -- # break 00:23:25.219 05:20:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:25.219 05:20:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:25.219 05:20:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:25.219 1+0 records in 00:23:25.219 1+0 records out 00:23:25.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250115 s, 16.4 MB/s 00:23:25.219 05:20:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:25.219 05:20:44 
-- common/autotest_common.sh@874 -- # size=4096 00:23:25.219 05:20:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:25.219 05:20:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:25.219 05:20:44 -- common/autotest_common.sh@877 -- # return 0 00:23:25.219 05:20:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:25.219 05:20:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:25.219 05:20:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:25.478 /dev/nbd1 00:23:25.478 05:20:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:25.478 05:20:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:25.478 05:20:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:23:25.478 05:20:44 -- common/autotest_common.sh@857 -- # local i 00:23:25.478 05:20:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:25.478 05:20:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:25.478 05:20:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:23:25.478 05:20:44 -- common/autotest_common.sh@861 -- # break 00:23:25.478 05:20:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:25.478 05:20:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:25.478 05:20:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:25.478 1+0 records in 00:23:25.478 1+0 records out 00:23:25.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264948 s, 15.5 MB/s 00:23:25.478 05:20:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:25.478 05:20:44 -- common/autotest_common.sh@874 -- # size=4096 00:23:25.478 05:20:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:25.478 05:20:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:25.478 05:20:44 -- common/autotest_common.sh@877 -- # return 0 00:23:25.478 05:20:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:25.478 05:20:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:25.478 05:20:44 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:25.737 05:20:44 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:25.737 05:20:44 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:25.737 05:20:44 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:25.737 05:20:44 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:25.738 05:20:44 -- bdev/nbd_common.sh@51 -- # local i 00:23:25.738 05:20:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:25.738 05:20:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:25.738 05:20:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:25.738 05:20:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:25.738 05:20:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:25.738 05:20:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:25.738 05:20:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:25.738 05:20:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:25.738 05:20:44 -- bdev/nbd_common.sh@41 -- # break 00:23:25.738 05:20:44 -- bdev/nbd_common.sh@45 -- # return 0 00:23:25.738 05:20:44 -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:23:25.738 05:20:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:25.997 05:20:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:25.997 05:20:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:25.997 05:20:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:25.997 05:20:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:25.997 05:20:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:25.997 05:20:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:25.997 05:20:45 -- bdev/nbd_common.sh@41 -- # break 00:23:25.997 05:20:45 -- bdev/nbd_common.sh@45 -- # return 0 00:23:25.997 05:20:45 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:23:25.997 05:20:45 -- bdev/bdev_raid.sh@709 -- # killprocess 83473 00:23:25.997 05:20:45 -- common/autotest_common.sh@926 -- # '[' -z 83473 ']' 00:23:25.997 05:20:45 -- common/autotest_common.sh@930 -- # kill -0 83473 00:23:25.997 05:20:45 -- common/autotest_common.sh@931 -- # uname 00:23:25.997 05:20:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:25.997 05:20:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83473 00:23:25.997 05:20:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:25.997 05:20:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:25.997 05:20:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83473' 00:23:25.997 killing process with pid 83473 00:23:25.997 05:20:45 -- common/autotest_common.sh@945 -- # kill 83473 00:23:25.997 Received shutdown signal, test time was about 60.000000 seconds 00:23:25.997 00:23:25.997 Latency(us) 00:23:25.997 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.997 =================================================================================================================== 00:23:25.997 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:25.997 [2024-07-26 05:20:45.064319] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:25.997 05:20:45 -- common/autotest_common.sh@950 -- # wait 83473 00:23:26.256 [2024-07-26 05:20:45.315566] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:27.193 05:20:46 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:27.193 00:23:27.193 real 0m17.877s 00:23:27.193 user 0m25.002s 00:23:27.193 sys 0m2.401s 00:23:27.193 05:20:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:27.193 05:20:46 -- common/autotest_common.sh@10 -- # set +x 00:23:27.193 ************************************ 00:23:27.193 END TEST raid5f_rebuild_test 00:23:27.193 ************************************ 00:23:27.193 05:20:46 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false 00:23:27.193 05:20:46 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:23:27.193 05:20:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:27.193 05:20:46 -- common/autotest_common.sh@10 -- # set +x 00:23:27.193 ************************************ 00:23:27.193 START TEST raid5f_rebuild_test_sb 00:23:27.193 ************************************ 00:23:27.193 05:20:46 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 3 true false 00:23:27.193 05:20:46 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:23:27.193 05:20:46 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:23:27.193 05:20:46 -- bdev/bdev_raid.sh@519 -- # local 
superblock=true 00:23:27.193 05:20:46 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:23:27.193 05:20:46 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:27.193 05:20:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:27.193 05:20:46 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:23:27.193 05:20:46 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:27.193 05:20:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:27.193 05:20:46 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:23:27.194 05:20:46 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:27.194 05:20:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:27.194 05:20:46 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:23:27.194 05:20:46 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:27.194 05:20:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:27.194 05:20:46 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:27.194 05:20:46 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:27.194 05:20:46 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:27.194 05:20:46 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:27.194 05:20:46 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:27.194 05:20:46 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:27.194 05:20:46 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:27.194 05:20:46 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:23:27.194 05:20:46 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:23:27.194 05:20:46 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:23:27.194 05:20:46 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:23:27.194 05:20:46 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:23:27.194 05:20:46 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:23:27.194 05:20:46 -- bdev/bdev_raid.sh@544 -- # raid_pid=83954 00:23:27.194 05:20:46 -- bdev/bdev_raid.sh@545 -- # waitforlisten 83954 /var/tmp/spdk-raid.sock 00:23:27.194 05:20:46 -- common/autotest_common.sh@819 -- # '[' -z 83954 ']' 00:23:27.194 05:20:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:27.194 05:20:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:27.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:27.194 05:20:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:27.194 05:20:46 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:27.194 05:20:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:27.194 05:20:46 -- common/autotest_common.sh@10 -- # set +x 00:23:27.453 [2024-07-26 05:20:46.337440] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:23:27.453 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:27.453 Zero copy mechanism will not be used. 
00:23:27.453 [2024-07-26 05:20:46.337655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83954 ] 00:23:27.453 [2024-07-26 05:20:46.508499] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.712 [2024-07-26 05:20:46.658792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.712 [2024-07-26 05:20:46.805781] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:28.279 05:20:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:28.279 05:20:47 -- common/autotest_common.sh@852 -- # return 0 00:23:28.279 05:20:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:28.279 05:20:47 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:28.279 05:20:47 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:28.538 BaseBdev1_malloc 00:23:28.538 05:20:47 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:28.538 [2024-07-26 05:20:47.571011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:28.538 [2024-07-26 05:20:47.571158] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.538 [2024-07-26 05:20:47.571190] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:23:28.538 [2024-07-26 05:20:47.571205] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.538 [2024-07-26 05:20:47.573284] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.538 [2024-07-26 05:20:47.573327] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:28.538 BaseBdev1 00:23:28.538 05:20:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:28.538 05:20:47 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:28.538 05:20:47 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:28.796 BaseBdev2_malloc 00:23:28.796 05:20:47 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:29.055 [2024-07-26 05:20:47.956351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:29.055 [2024-07-26 05:20:47.956430] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:29.055 [2024-07-26 05:20:47.956483] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:23:29.055 [2024-07-26 05:20:47.956500] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:29.055 [2024-07-26 05:20:47.958607] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:29.055 [2024-07-26 05:20:47.958650] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:29.055 BaseBdev2 00:23:29.055 05:20:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:29.055 05:20:47 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:29.055 05:20:47 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:29.055 BaseBdev3_malloc 00:23:29.314 05:20:48 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:29.314 [2024-07-26 05:20:48.332887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:29.314 [2024-07-26 05:20:48.332982] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:29.314 [2024-07-26 05:20:48.333010] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:23:29.314 [2024-07-26 05:20:48.333040] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:29.314 [2024-07-26 05:20:48.335377] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:29.314 [2024-07-26 05:20:48.335439] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:29.314 BaseBdev3 00:23:29.314 05:20:48 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:29.573 spare_malloc 00:23:29.573 05:20:48 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:29.831 spare_delay 00:23:29.831 05:20:48 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:30.090 [2024-07-26 05:20:48.957222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:30.090 [2024-07-26 05:20:48.957324] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:30.090 [2024-07-26 05:20:48.957352] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:23:30.090 [2024-07-26 05:20:48.957367] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:30.090 [2024-07-26 05:20:48.960162] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:30.090 [2024-07-26 05:20:48.960209] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:30.090 spare 00:23:30.091 05:20:48 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:23:30.091 [2024-07-26 05:20:49.133297] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:30.091 [2024-07-26 05:20:49.135049] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:30.091 [2024-07-26 05:20:49.135126] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:30.091 [2024-07-26 05:20:49.135324] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009980 00:23:30.091 [2024-07-26 05:20:49.135370] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:30.091 [2024-07-26 05:20:49.135486] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:23:30.091 [2024-07-26 05:20:49.139734] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009980 00:23:30.091 [2024-07-26 05:20:49.139764] 
bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009980 00:23:30.091 [2024-07-26 05:20:49.139942] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:30.091 05:20:49 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:30.091 05:20:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:30.091 05:20:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:30.091 05:20:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:30.091 05:20:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:30.091 05:20:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:30.091 05:20:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:30.091 05:20:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:30.091 05:20:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:30.091 05:20:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:30.091 05:20:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.091 05:20:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.349 05:20:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:30.349 "name": "raid_bdev1", 00:23:30.349 "uuid": "1c34d360-f6c9-4cdc-a800-bc09d2342471", 00:23:30.349 "strip_size_kb": 64, 00:23:30.349 "state": "online", 00:23:30.349 "raid_level": "raid5f", 00:23:30.349 "superblock": true, 00:23:30.349 "num_base_bdevs": 3, 00:23:30.349 "num_base_bdevs_discovered": 3, 00:23:30.349 "num_base_bdevs_operational": 3, 00:23:30.349 "base_bdevs_list": [ 00:23:30.349 { 00:23:30.349 "name": "BaseBdev1", 00:23:30.349 "uuid": "c23f290c-404c-5c4a-a6a7-343a3a8186f2", 00:23:30.349 "is_configured": true, 00:23:30.349 "data_offset": 2048, 00:23:30.349 "data_size": 63488 00:23:30.349 }, 00:23:30.349 { 00:23:30.349 "name": "BaseBdev2", 00:23:30.349 "uuid": "9226e872-1753-5400-9e35-938d71973553", 00:23:30.349 "is_configured": true, 00:23:30.349 "data_offset": 2048, 00:23:30.349 "data_size": 63488 00:23:30.349 }, 00:23:30.349 { 00:23:30.349 "name": "BaseBdev3", 00:23:30.349 "uuid": "8bc0389c-f342-5dae-aebe-eea89f20b4c1", 00:23:30.349 "is_configured": true, 00:23:30.349 "data_offset": 2048, 00:23:30.349 "data_size": 63488 00:23:30.349 } 00:23:30.349 ] 00:23:30.349 }' 00:23:30.349 05:20:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:30.349 05:20:49 -- common/autotest_common.sh@10 -- # set +x 00:23:30.608 05:20:49 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:30.608 05:20:49 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:30.867 [2024-07-26 05:20:49.832619] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:30.867 05:20:49 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=126976 00:23:30.867 05:20:49 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.867 05:20:49 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:31.125 05:20:50 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:23:31.125 05:20:50 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:23:31.125 05:20:50 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:23:31.125 05:20:50 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock 
raid_bdev1 /dev/nbd0 00:23:31.125 05:20:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:31.125 05:20:50 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:31.125 05:20:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:31.125 05:20:50 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:31.125 05:20:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:31.125 05:20:50 -- bdev/nbd_common.sh@12 -- # local i 00:23:31.125 05:20:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:31.125 05:20:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:31.125 05:20:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:31.125 [2024-07-26 05:20:50.204580] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:23:31.125 /dev/nbd0 00:23:31.125 05:20:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:31.385 05:20:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:31.385 05:20:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:31.385 05:20:50 -- common/autotest_common.sh@857 -- # local i 00:23:31.385 05:20:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:31.385 05:20:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:31.385 05:20:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:31.385 05:20:50 -- common/autotest_common.sh@861 -- # break 00:23:31.385 05:20:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:31.385 05:20:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:31.385 05:20:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:31.385 1+0 records in 00:23:31.385 1+0 records out 00:23:31.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195151 s, 21.0 MB/s 00:23:31.385 05:20:50 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:31.385 05:20:50 -- common/autotest_common.sh@874 -- # size=4096 00:23:31.385 05:20:50 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:31.385 05:20:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:31.385 05:20:50 -- common/autotest_common.sh@877 -- # return 0 00:23:31.385 05:20:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:31.385 05:20:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:31.385 05:20:50 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:23:31.385 05:20:50 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:23:31.385 05:20:50 -- bdev/bdev_raid.sh@582 -- # echo 128 00:23:31.385 05:20:50 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:23:31.644 496+0 records in 00:23:31.644 496+0 records out 00:23:31.644 65011712 bytes (65 MB, 62 MiB) copied, 0.320706 s, 203 MB/s 00:23:31.644 05:20:50 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:31.644 05:20:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:31.644 05:20:50 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:31.644 05:20:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:31.644 05:20:50 -- bdev/nbd_common.sh@51 -- # local i 00:23:31.644 05:20:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:31.644 05:20:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
nbd_stop_disk /dev/nbd0 00:23:31.904 [2024-07-26 05:20:50.769344] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:31.904 05:20:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:31.904 05:20:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:31.904 05:20:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:31.904 05:20:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:31.904 05:20:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:31.904 05:20:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:31.904 05:20:50 -- bdev/nbd_common.sh@41 -- # break 00:23:31.904 05:20:50 -- bdev/nbd_common.sh@45 -- # return 0 00:23:31.904 05:20:50 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:31.904 [2024-07-26 05:20:50.954769] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:31.904 05:20:50 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:31.904 05:20:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:31.904 05:20:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:31.904 05:20:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:31.904 05:20:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:31.904 05:20:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:31.904 05:20:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:31.904 05:20:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:31.904 05:20:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:31.904 05:20:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:31.904 05:20:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.904 05:20:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.163 05:20:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:32.163 "name": "raid_bdev1", 00:23:32.163 "uuid": "1c34d360-f6c9-4cdc-a800-bc09d2342471", 00:23:32.163 "strip_size_kb": 64, 00:23:32.163 "state": "online", 00:23:32.163 "raid_level": "raid5f", 00:23:32.163 "superblock": true, 00:23:32.163 "num_base_bdevs": 3, 00:23:32.163 "num_base_bdevs_discovered": 2, 00:23:32.163 "num_base_bdevs_operational": 2, 00:23:32.163 "base_bdevs_list": [ 00:23:32.163 { 00:23:32.163 "name": null, 00:23:32.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:32.163 "is_configured": false, 00:23:32.163 "data_offset": 2048, 00:23:32.163 "data_size": 63488 00:23:32.163 }, 00:23:32.163 { 00:23:32.163 "name": "BaseBdev2", 00:23:32.163 "uuid": "9226e872-1753-5400-9e35-938d71973553", 00:23:32.163 "is_configured": true, 00:23:32.163 "data_offset": 2048, 00:23:32.163 "data_size": 63488 00:23:32.163 }, 00:23:32.163 { 00:23:32.163 "name": "BaseBdev3", 00:23:32.163 "uuid": "8bc0389c-f342-5dae-aebe-eea89f20b4c1", 00:23:32.163 "is_configured": true, 00:23:32.163 "data_offset": 2048, 00:23:32.163 "data_size": 63488 00:23:32.163 } 00:23:32.163 ] 00:23:32.163 }' 00:23:32.163 05:20:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:32.163 05:20:51 -- common/autotest_common.sh@10 -- # set +x 00:23:32.422 05:20:51 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:32.681 [2024-07-26 05:20:51.642953] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: 
attach_base_device: spare 00:23:32.681 [2024-07-26 05:20:51.643235] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:32.681 [2024-07-26 05:20:51.653928] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000028830 00:23:32.681 [2024-07-26 05:20:51.659785] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:32.681 05:20:51 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:33.663 05:20:52 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:33.663 05:20:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:33.663 05:20:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:33.663 05:20:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:33.663 05:20:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:33.663 05:20:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.663 05:20:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.935 05:20:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:33.935 "name": "raid_bdev1", 00:23:33.935 "uuid": "1c34d360-f6c9-4cdc-a800-bc09d2342471", 00:23:33.935 "strip_size_kb": 64, 00:23:33.935 "state": "online", 00:23:33.935 "raid_level": "raid5f", 00:23:33.935 "superblock": true, 00:23:33.935 "num_base_bdevs": 3, 00:23:33.935 "num_base_bdevs_discovered": 3, 00:23:33.935 "num_base_bdevs_operational": 3, 00:23:33.935 "process": { 00:23:33.935 "type": "rebuild", 00:23:33.935 "target": "spare", 00:23:33.935 "progress": { 00:23:33.935 "blocks": 22528, 00:23:33.935 "percent": 17 00:23:33.935 } 00:23:33.935 }, 00:23:33.935 "base_bdevs_list": [ 00:23:33.935 { 00:23:33.935 "name": "spare", 00:23:33.935 "uuid": "4a9a0765-de85-5b4b-a9c7-f98c78f63981", 00:23:33.935 "is_configured": true, 00:23:33.935 "data_offset": 2048, 00:23:33.935 "data_size": 63488 00:23:33.935 }, 00:23:33.935 { 00:23:33.935 "name": "BaseBdev2", 00:23:33.935 "uuid": "9226e872-1753-5400-9e35-938d71973553", 00:23:33.935 "is_configured": true, 00:23:33.935 "data_offset": 2048, 00:23:33.935 "data_size": 63488 00:23:33.935 }, 00:23:33.935 { 00:23:33.935 "name": "BaseBdev3", 00:23:33.935 "uuid": "8bc0389c-f342-5dae-aebe-eea89f20b4c1", 00:23:33.935 "is_configured": true, 00:23:33.935 "data_offset": 2048, 00:23:33.935 "data_size": 63488 00:23:33.935 } 00:23:33.935 ] 00:23:33.935 }' 00:23:33.935 05:20:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:33.935 05:20:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:33.935 05:20:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:33.935 05:20:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:33.935 05:20:52 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:34.194 [2024-07-26 05:20:53.121698] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:34.194 [2024-07-26 05:20:53.172357] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:34.194 [2024-07-26 05:20:53.172441] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:34.194 05:20:53 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:34.194 05:20:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
00:23:34.194 05:20:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:34.194 05:20:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:34.194 05:20:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:34.194 05:20:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:34.194 05:20:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:34.194 05:20:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:34.194 05:20:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:34.194 05:20:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:34.194 05:20:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.194 05:20:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.459 05:20:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:34.459 "name": "raid_bdev1", 00:23:34.459 "uuid": "1c34d360-f6c9-4cdc-a800-bc09d2342471", 00:23:34.459 "strip_size_kb": 64, 00:23:34.459 "state": "online", 00:23:34.459 "raid_level": "raid5f", 00:23:34.459 "superblock": true, 00:23:34.459 "num_base_bdevs": 3, 00:23:34.459 "num_base_bdevs_discovered": 2, 00:23:34.459 "num_base_bdevs_operational": 2, 00:23:34.459 "base_bdevs_list": [ 00:23:34.459 { 00:23:34.459 "name": null, 00:23:34.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.459 "is_configured": false, 00:23:34.459 "data_offset": 2048, 00:23:34.459 "data_size": 63488 00:23:34.459 }, 00:23:34.459 { 00:23:34.459 "name": "BaseBdev2", 00:23:34.459 "uuid": "9226e872-1753-5400-9e35-938d71973553", 00:23:34.459 "is_configured": true, 00:23:34.459 "data_offset": 2048, 00:23:34.459 "data_size": 63488 00:23:34.459 }, 00:23:34.459 { 00:23:34.459 "name": "BaseBdev3", 00:23:34.459 "uuid": "8bc0389c-f342-5dae-aebe-eea89f20b4c1", 00:23:34.459 "is_configured": true, 00:23:34.459 "data_offset": 2048, 00:23:34.459 "data_size": 63488 00:23:34.459 } 00:23:34.459 ] 00:23:34.459 }' 00:23:34.459 05:20:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:34.459 05:20:53 -- common/autotest_common.sh@10 -- # set +x 00:23:34.717 05:20:53 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:34.717 05:20:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:34.717 05:20:53 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:34.717 05:20:53 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:34.717 05:20:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:34.717 05:20:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.717 05:20:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.976 05:20:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:34.976 "name": "raid_bdev1", 00:23:34.976 "uuid": "1c34d360-f6c9-4cdc-a800-bc09d2342471", 00:23:34.976 "strip_size_kb": 64, 00:23:34.976 "state": "online", 00:23:34.976 "raid_level": "raid5f", 00:23:34.976 "superblock": true, 00:23:34.976 "num_base_bdevs": 3, 00:23:34.976 "num_base_bdevs_discovered": 2, 00:23:34.976 "num_base_bdevs_operational": 2, 00:23:34.976 "base_bdevs_list": [ 00:23:34.976 { 00:23:34.976 "name": null, 00:23:34.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.976 "is_configured": false, 00:23:34.976 "data_offset": 2048, 00:23:34.976 "data_size": 63488 00:23:34.976 }, 00:23:34.976 { 00:23:34.976 "name": "BaseBdev2", 00:23:34.976 "uuid": 
"9226e872-1753-5400-9e35-938d71973553", 00:23:34.976 "is_configured": true, 00:23:34.976 "data_offset": 2048, 00:23:34.976 "data_size": 63488 00:23:34.976 }, 00:23:34.976 { 00:23:34.976 "name": "BaseBdev3", 00:23:34.976 "uuid": "8bc0389c-f342-5dae-aebe-eea89f20b4c1", 00:23:34.977 "is_configured": true, 00:23:34.977 "data_offset": 2048, 00:23:34.977 "data_size": 63488 00:23:34.977 } 00:23:34.977 ] 00:23:34.977 }' 00:23:34.977 05:20:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:34.977 05:20:53 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:34.977 05:20:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:34.977 05:20:53 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:34.977 05:20:53 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:34.977 [2024-07-26 05:20:54.075706] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:34.977 [2024-07-26 05:20:54.075749] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:34.977 [2024-07-26 05:20:54.086056] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000028900 00:23:35.235 [2024-07-26 05:20:54.092544] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:35.236 05:20:54 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:36.172 05:20:55 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:36.172 05:20:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:36.172 05:20:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:36.172 05:20:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:36.172 05:20:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:36.172 05:20:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.172 05:20:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.431 05:20:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:36.431 "name": "raid_bdev1", 00:23:36.431 "uuid": "1c34d360-f6c9-4cdc-a800-bc09d2342471", 00:23:36.431 "strip_size_kb": 64, 00:23:36.431 "state": "online", 00:23:36.431 "raid_level": "raid5f", 00:23:36.431 "superblock": true, 00:23:36.431 "num_base_bdevs": 3, 00:23:36.431 "num_base_bdevs_discovered": 3, 00:23:36.431 "num_base_bdevs_operational": 3, 00:23:36.431 "process": { 00:23:36.431 "type": "rebuild", 00:23:36.431 "target": "spare", 00:23:36.431 "progress": { 00:23:36.431 "blocks": 24576, 00:23:36.431 "percent": 19 00:23:36.431 } 00:23:36.431 }, 00:23:36.431 "base_bdevs_list": [ 00:23:36.431 { 00:23:36.431 "name": "spare", 00:23:36.431 "uuid": "4a9a0765-de85-5b4b-a9c7-f98c78f63981", 00:23:36.431 "is_configured": true, 00:23:36.431 "data_offset": 2048, 00:23:36.431 "data_size": 63488 00:23:36.431 }, 00:23:36.431 { 00:23:36.431 "name": "BaseBdev2", 00:23:36.431 "uuid": "9226e872-1753-5400-9e35-938d71973553", 00:23:36.431 "is_configured": true, 00:23:36.431 "data_offset": 2048, 00:23:36.431 "data_size": 63488 00:23:36.431 }, 00:23:36.432 { 00:23:36.432 "name": "BaseBdev3", 00:23:36.432 "uuid": "8bc0389c-f342-5dae-aebe-eea89f20b4c1", 00:23:36.432 "is_configured": true, 00:23:36.432 "data_offset": 2048, 00:23:36.432 "data_size": 63488 00:23:36.432 } 00:23:36.432 ] 00:23:36.432 }' 00:23:36.432 05:20:55 -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.type // "none"' 00:23:36.432 05:20:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:36.432 05:20:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:36.432 05:20:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:36.432 05:20:55 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:23:36.432 05:20:55 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:23:36.432 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:23:36.432 05:20:55 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:23:36.432 05:20:55 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:23:36.432 05:20:55 -- bdev/bdev_raid.sh@657 -- # local timeout=562 00:23:36.432 05:20:55 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:36.432 05:20:55 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:36.432 05:20:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:36.432 05:20:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:36.432 05:20:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:36.432 05:20:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:36.432 05:20:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.432 05:20:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.691 05:20:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:36.691 "name": "raid_bdev1", 00:23:36.691 "uuid": "1c34d360-f6c9-4cdc-a800-bc09d2342471", 00:23:36.691 "strip_size_kb": 64, 00:23:36.691 "state": "online", 00:23:36.691 "raid_level": "raid5f", 00:23:36.691 "superblock": true, 00:23:36.691 "num_base_bdevs": 3, 00:23:36.691 "num_base_bdevs_discovered": 3, 00:23:36.691 "num_base_bdevs_operational": 3, 00:23:36.691 "process": { 00:23:36.691 "type": "rebuild", 00:23:36.691 "target": "spare", 00:23:36.691 "progress": { 00:23:36.691 "blocks": 28672, 00:23:36.691 "percent": 22 00:23:36.691 } 00:23:36.691 }, 00:23:36.691 "base_bdevs_list": [ 00:23:36.691 { 00:23:36.691 "name": "spare", 00:23:36.691 "uuid": "4a9a0765-de85-5b4b-a9c7-f98c78f63981", 00:23:36.691 "is_configured": true, 00:23:36.691 "data_offset": 2048, 00:23:36.691 "data_size": 63488 00:23:36.691 }, 00:23:36.691 { 00:23:36.691 "name": "BaseBdev2", 00:23:36.691 "uuid": "9226e872-1753-5400-9e35-938d71973553", 00:23:36.691 "is_configured": true, 00:23:36.691 "data_offset": 2048, 00:23:36.691 "data_size": 63488 00:23:36.691 }, 00:23:36.691 { 00:23:36.691 "name": "BaseBdev3", 00:23:36.691 "uuid": "8bc0389c-f342-5dae-aebe-eea89f20b4c1", 00:23:36.691 "is_configured": true, 00:23:36.691 "data_offset": 2048, 00:23:36.691 "data_size": 63488 00:23:36.691 } 00:23:36.691 ] 00:23:36.691 }' 00:23:36.691 05:20:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:36.691 05:20:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:36.691 05:20:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:36.691 05:20:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:36.691 05:20:55 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:37.627 05:20:56 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:37.627 05:20:56 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:37.627 05:20:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:37.627 05:20:56 -- 
bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:37.627 05:20:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:37.627 05:20:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:37.627 05:20:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.627 05:20:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.886 05:20:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:37.886 "name": "raid_bdev1", 00:23:37.886 "uuid": "1c34d360-f6c9-4cdc-a800-bc09d2342471", 00:23:37.886 "strip_size_kb": 64, 00:23:37.886 "state": "online", 00:23:37.886 "raid_level": "raid5f", 00:23:37.886 "superblock": true, 00:23:37.886 "num_base_bdevs": 3, 00:23:37.886 "num_base_bdevs_discovered": 3, 00:23:37.886 "num_base_bdevs_operational": 3, 00:23:37.886 "process": { 00:23:37.886 "type": "rebuild", 00:23:37.886 "target": "spare", 00:23:37.886 "progress": { 00:23:37.886 "blocks": 53248, 00:23:37.886 "percent": 41 00:23:37.886 } 00:23:37.886 }, 00:23:37.886 "base_bdevs_list": [ 00:23:37.886 { 00:23:37.886 "name": "spare", 00:23:37.886 "uuid": "4a9a0765-de85-5b4b-a9c7-f98c78f63981", 00:23:37.886 "is_configured": true, 00:23:37.886 "data_offset": 2048, 00:23:37.886 "data_size": 63488 00:23:37.886 }, 00:23:37.886 { 00:23:37.886 "name": "BaseBdev2", 00:23:37.886 "uuid": "9226e872-1753-5400-9e35-938d71973553", 00:23:37.886 "is_configured": true, 00:23:37.886 "data_offset": 2048, 00:23:37.886 "data_size": 63488 00:23:37.886 }, 00:23:37.886 { 00:23:37.886 "name": "BaseBdev3", 00:23:37.886 "uuid": "8bc0389c-f342-5dae-aebe-eea89f20b4c1", 00:23:37.886 "is_configured": true, 00:23:37.886 "data_offset": 2048, 00:23:37.886 "data_size": 63488 00:23:37.886 } 00:23:37.886 ] 00:23:37.886 }' 00:23:37.886 05:20:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:37.886 05:20:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:37.886 05:20:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:37.886 05:20:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:37.886 05:20:56 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:38.823 05:20:57 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:38.823 05:20:57 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:38.823 05:20:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:38.823 05:20:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:38.823 05:20:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:38.823 05:20:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:38.823 05:20:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.823 05:20:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.082 05:20:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:39.082 "name": "raid_bdev1", 00:23:39.082 "uuid": "1c34d360-f6c9-4cdc-a800-bc09d2342471", 00:23:39.082 "strip_size_kb": 64, 00:23:39.082 "state": "online", 00:23:39.082 "raid_level": "raid5f", 00:23:39.082 "superblock": true, 00:23:39.082 "num_base_bdevs": 3, 00:23:39.082 "num_base_bdevs_discovered": 3, 00:23:39.082 "num_base_bdevs_operational": 3, 00:23:39.082 "process": { 00:23:39.082 "type": "rebuild", 00:23:39.082 "target": "spare", 00:23:39.082 "progress": { 00:23:39.082 "blocks": 79872, 00:23:39.082 "percent": 62 00:23:39.082 } 
00:23:39.082 }, 00:23:39.082 "base_bdevs_list": [ 00:23:39.082 { 00:23:39.082 "name": "spare", 00:23:39.082 "uuid": "4a9a0765-de85-5b4b-a9c7-f98c78f63981", 00:23:39.082 "is_configured": true, 00:23:39.082 "data_offset": 2048, 00:23:39.082 "data_size": 63488 00:23:39.082 }, 00:23:39.082 { 00:23:39.082 "name": "BaseBdev2", 00:23:39.082 "uuid": "9226e872-1753-5400-9e35-938d71973553", 00:23:39.082 "is_configured": true, 00:23:39.082 "data_offset": 2048, 00:23:39.082 "data_size": 63488 00:23:39.082 }, 00:23:39.082 { 00:23:39.082 "name": "BaseBdev3", 00:23:39.082 "uuid": "8bc0389c-f342-5dae-aebe-eea89f20b4c1", 00:23:39.082 "is_configured": true, 00:23:39.082 "data_offset": 2048, 00:23:39.082 "data_size": 63488 00:23:39.082 } 00:23:39.082 ] 00:23:39.082 }' 00:23:39.082 05:20:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:39.082 05:20:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:39.082 05:20:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:39.082 05:20:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:39.082 05:20:58 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:40.019 05:20:59 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:40.019 05:20:59 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:40.019 05:20:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:40.019 05:20:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:40.019 05:20:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:40.019 05:20:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:40.019 05:20:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.019 05:20:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.278 05:20:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:40.278 "name": "raid_bdev1", 00:23:40.278 "uuid": "1c34d360-f6c9-4cdc-a800-bc09d2342471", 00:23:40.278 "strip_size_kb": 64, 00:23:40.278 "state": "online", 00:23:40.278 "raid_level": "raid5f", 00:23:40.278 "superblock": true, 00:23:40.278 "num_base_bdevs": 3, 00:23:40.278 "num_base_bdevs_discovered": 3, 00:23:40.278 "num_base_bdevs_operational": 3, 00:23:40.278 "process": { 00:23:40.278 "type": "rebuild", 00:23:40.278 "target": "spare", 00:23:40.278 "progress": { 00:23:40.278 "blocks": 104448, 00:23:40.278 "percent": 82 00:23:40.278 } 00:23:40.278 }, 00:23:40.278 "base_bdevs_list": [ 00:23:40.278 { 00:23:40.278 "name": "spare", 00:23:40.278 "uuid": "4a9a0765-de85-5b4b-a9c7-f98c78f63981", 00:23:40.278 "is_configured": true, 00:23:40.278 "data_offset": 2048, 00:23:40.278 "data_size": 63488 00:23:40.278 }, 00:23:40.278 { 00:23:40.278 "name": "BaseBdev2", 00:23:40.278 "uuid": "9226e872-1753-5400-9e35-938d71973553", 00:23:40.278 "is_configured": true, 00:23:40.278 "data_offset": 2048, 00:23:40.278 "data_size": 63488 00:23:40.278 }, 00:23:40.278 { 00:23:40.278 "name": "BaseBdev3", 00:23:40.278 "uuid": "8bc0389c-f342-5dae-aebe-eea89f20b4c1", 00:23:40.278 "is_configured": true, 00:23:40.278 "data_offset": 2048, 00:23:40.278 "data_size": 63488 00:23:40.278 } 00:23:40.278 ] 00:23:40.278 }' 00:23:40.278 05:20:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:40.278 05:20:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:40.278 05:20:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:40.278 05:20:59 -- 
bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:40.278 05:20:59 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:41.657 [2024-07-26 05:21:00.340998] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:41.657 [2024-07-26 05:21:00.341091] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:41.657 [2024-07-26 05:21:00.341241] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:41.657 05:21:00 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:41.657 05:21:00 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:41.657 05:21:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:41.657 05:21:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:41.657 05:21:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:41.657 05:21:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:41.657 05:21:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.657 05:21:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.657 05:21:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:41.657 "name": "raid_bdev1", 00:23:41.657 "uuid": "1c34d360-f6c9-4cdc-a800-bc09d2342471", 00:23:41.657 "strip_size_kb": 64, 00:23:41.657 "state": "online", 00:23:41.657 "raid_level": "raid5f", 00:23:41.657 "superblock": true, 00:23:41.657 "num_base_bdevs": 3, 00:23:41.657 "num_base_bdevs_discovered": 3, 00:23:41.657 "num_base_bdevs_operational": 3, 00:23:41.657 "base_bdevs_list": [ 00:23:41.657 { 00:23:41.657 "name": "spare", 00:23:41.657 "uuid": "4a9a0765-de85-5b4b-a9c7-f98c78f63981", 00:23:41.657 "is_configured": true, 00:23:41.657 "data_offset": 2048, 00:23:41.657 "data_size": 63488 00:23:41.657 }, 00:23:41.657 { 00:23:41.657 "name": "BaseBdev2", 00:23:41.657 "uuid": "9226e872-1753-5400-9e35-938d71973553", 00:23:41.657 "is_configured": true, 00:23:41.657 "data_offset": 2048, 00:23:41.657 "data_size": 63488 00:23:41.657 }, 00:23:41.657 { 00:23:41.657 "name": "BaseBdev3", 00:23:41.657 "uuid": "8bc0389c-f342-5dae-aebe-eea89f20b4c1", 00:23:41.657 "is_configured": true, 00:23:41.657 "data_offset": 2048, 00:23:41.657 "data_size": 63488 00:23:41.657 } 00:23:41.657 ] 00:23:41.657 }' 00:23:41.657 05:21:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:41.657 05:21:00 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:41.657 05:21:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:41.657 05:21:00 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:41.657 05:21:00 -- bdev/bdev_raid.sh@660 -- # break 00:23:41.657 05:21:00 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:41.657 05:21:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:41.657 05:21:00 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:41.657 05:21:00 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:41.657 05:21:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:41.657 05:21:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.657 05:21:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.916 05:21:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:41.916 "name": "raid_bdev1", 00:23:41.916 "uuid": 
"1c34d360-f6c9-4cdc-a800-bc09d2342471", 00:23:41.916 "strip_size_kb": 64, 00:23:41.916 "state": "online", 00:23:41.916 "raid_level": "raid5f", 00:23:41.916 "superblock": true, 00:23:41.916 "num_base_bdevs": 3, 00:23:41.916 "num_base_bdevs_discovered": 3, 00:23:41.916 "num_base_bdevs_operational": 3, 00:23:41.916 "base_bdevs_list": [ 00:23:41.916 { 00:23:41.916 "name": "spare", 00:23:41.916 "uuid": "4a9a0765-de85-5b4b-a9c7-f98c78f63981", 00:23:41.916 "is_configured": true, 00:23:41.916 "data_offset": 2048, 00:23:41.916 "data_size": 63488 00:23:41.916 }, 00:23:41.916 { 00:23:41.916 "name": "BaseBdev2", 00:23:41.916 "uuid": "9226e872-1753-5400-9e35-938d71973553", 00:23:41.916 "is_configured": true, 00:23:41.916 "data_offset": 2048, 00:23:41.916 "data_size": 63488 00:23:41.916 }, 00:23:41.916 { 00:23:41.916 "name": "BaseBdev3", 00:23:41.916 "uuid": "8bc0389c-f342-5dae-aebe-eea89f20b4c1", 00:23:41.916 "is_configured": true, 00:23:41.916 "data_offset": 2048, 00:23:41.916 "data_size": 63488 00:23:41.916 } 00:23:41.916 ] 00:23:41.916 }' 00:23:41.916 05:21:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:41.917 05:21:00 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:41.917 05:21:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:41.917 05:21:00 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:41.917 05:21:00 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:41.917 05:21:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:41.917 05:21:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:41.917 05:21:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:41.917 05:21:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:41.917 05:21:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:41.917 05:21:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:41.917 05:21:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:41.917 05:21:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:41.917 05:21:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:41.917 05:21:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.917 05:21:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.175 05:21:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:42.176 "name": "raid_bdev1", 00:23:42.176 "uuid": "1c34d360-f6c9-4cdc-a800-bc09d2342471", 00:23:42.176 "strip_size_kb": 64, 00:23:42.176 "state": "online", 00:23:42.176 "raid_level": "raid5f", 00:23:42.176 "superblock": true, 00:23:42.176 "num_base_bdevs": 3, 00:23:42.176 "num_base_bdevs_discovered": 3, 00:23:42.176 "num_base_bdevs_operational": 3, 00:23:42.176 "base_bdevs_list": [ 00:23:42.176 { 00:23:42.176 "name": "spare", 00:23:42.176 "uuid": "4a9a0765-de85-5b4b-a9c7-f98c78f63981", 00:23:42.176 "is_configured": true, 00:23:42.176 "data_offset": 2048, 00:23:42.176 "data_size": 63488 00:23:42.176 }, 00:23:42.176 { 00:23:42.176 "name": "BaseBdev2", 00:23:42.176 "uuid": "9226e872-1753-5400-9e35-938d71973553", 00:23:42.176 "is_configured": true, 00:23:42.176 "data_offset": 2048, 00:23:42.176 "data_size": 63488 00:23:42.176 }, 00:23:42.176 { 00:23:42.176 "name": "BaseBdev3", 00:23:42.176 "uuid": "8bc0389c-f342-5dae-aebe-eea89f20b4c1", 00:23:42.176 "is_configured": true, 00:23:42.176 "data_offset": 2048, 00:23:42.176 "data_size": 63488 00:23:42.176 } 
00:23:42.176 ] 00:23:42.176 }' 00:23:42.176 05:21:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:42.176 05:21:01 -- common/autotest_common.sh@10 -- # set +x 00:23:42.434 05:21:01 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:42.693 [2024-07-26 05:21:01.636929] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:42.693 [2024-07-26 05:21:01.637149] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:42.693 [2024-07-26 05:21:01.637290] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:42.693 [2024-07-26 05:21:01.637383] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:42.693 [2024-07-26 05:21:01.637418] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state offline 00:23:42.693 05:21:01 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.693 05:21:01 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:42.952 05:21:01 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:42.952 05:21:01 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:23:42.952 05:21:01 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:42.952 05:21:01 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:42.952 05:21:01 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:42.952 05:21:01 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:42.952 05:21:01 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:42.952 05:21:01 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:42.952 05:21:01 -- bdev/nbd_common.sh@12 -- # local i 00:23:42.952 05:21:01 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:42.952 05:21:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:42.952 05:21:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:43.211 /dev/nbd0 00:23:43.211 05:21:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:43.211 05:21:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:43.211 05:21:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:43.211 05:21:02 -- common/autotest_common.sh@857 -- # local i 00:23:43.211 05:21:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:43.211 05:21:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:43.211 05:21:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:43.211 05:21:02 -- common/autotest_common.sh@861 -- # break 00:23:43.211 05:21:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:43.211 05:21:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:43.211 05:21:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:43.211 1+0 records in 00:23:43.211 1+0 records out 00:23:43.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288453 s, 14.2 MB/s 00:23:43.211 05:21:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:43.211 05:21:02 -- common/autotest_common.sh@874 -- # size=4096 00:23:43.211 05:21:02 -- common/autotest_common.sh@875 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:43.211 05:21:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:43.211 05:21:02 -- common/autotest_common.sh@877 -- # return 0 00:23:43.211 05:21:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:43.211 05:21:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:43.211 05:21:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:43.471 /dev/nbd1 00:23:43.471 05:21:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:43.471 05:21:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:43.471 05:21:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:23:43.471 05:21:02 -- common/autotest_common.sh@857 -- # local i 00:23:43.471 05:21:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:43.471 05:21:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:43.471 05:21:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:23:43.471 05:21:02 -- common/autotest_common.sh@861 -- # break 00:23:43.471 05:21:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:43.471 05:21:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:43.471 05:21:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:43.471 1+0 records in 00:23:43.471 1+0 records out 00:23:43.471 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275317 s, 14.9 MB/s 00:23:43.471 05:21:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:43.471 05:21:02 -- common/autotest_common.sh@874 -- # size=4096 00:23:43.471 05:21:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:43.471 05:21:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:43.471 05:21:02 -- common/autotest_common.sh@877 -- # return 0 00:23:43.471 05:21:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:43.471 05:21:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:43.471 05:21:02 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:43.729 05:21:02 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:43.729 05:21:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:43.729 05:21:02 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:43.729 05:21:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:43.729 05:21:02 -- bdev/nbd_common.sh@51 -- # local i 00:23:43.729 05:21:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:43.729 05:21:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:43.988 05:21:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:43.988 05:21:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:43.988 05:21:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:43.988 05:21:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:43.988 05:21:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:43.988 05:21:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:43.988 05:21:02 -- bdev/nbd_common.sh@41 -- # break 00:23:43.988 05:21:02 -- bdev/nbd_common.sh@45 -- # return 0 00:23:43.988 05:21:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:43.988 05:21:02 -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:44.247 05:21:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:44.247 05:21:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:44.247 05:21:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:44.247 05:21:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:44.247 05:21:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:44.247 05:21:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:44.247 05:21:03 -- bdev/nbd_common.sh@41 -- # break 00:23:44.247 05:21:03 -- bdev/nbd_common.sh@45 -- # return 0 00:23:44.247 05:21:03 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:23:44.247 05:21:03 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:44.247 05:21:03 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:23:44.247 05:21:03 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:23:44.247 05:21:03 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:44.506 [2024-07-26 05:21:03.500986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:44.506 [2024-07-26 05:21:03.501109] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:44.506 [2024-07-26 05:21:03.501140] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:23:44.506 [2024-07-26 05:21:03.501172] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:44.506 [2024-07-26 05:21:03.503660] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:44.506 [2024-07-26 05:21:03.503719] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:44.506 [2024-07-26 05:21:03.503811] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:44.506 [2024-07-26 05:21:03.503872] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:44.506 BaseBdev1 00:23:44.506 05:21:03 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:44.506 05:21:03 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:23:44.506 05:21:03 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:23:44.764 05:21:03 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:44.764 [2024-07-26 05:21:03.873084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:44.764 [2024-07-26 05:21:03.873417] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:44.764 [2024-07-26 05:21:03.873457] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:23:44.764 [2024-07-26 05:21:03.873475] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:44.764 [2024-07-26 05:21:03.873980] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:44.764 [2024-07-26 05:21:03.874023] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:44.764 [2024-07-26 05:21:03.874112] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: 
raid superblock found on bdev BaseBdev2 00:23:44.764 [2024-07-26 05:21:03.874131] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:23:44.764 [2024-07-26 05:21:03.874141] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:44.764 [2024-07-26 05:21:03.874168] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ae80 name raid_bdev1, state configuring 00:23:44.764 [2024-07-26 05:21:03.874269] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:45.023 BaseBdev2 00:23:45.023 05:21:03 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:45.023 05:21:03 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:23:45.023 05:21:03 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:23:45.023 05:21:04 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:45.282 [2024-07-26 05:21:04.260261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:45.282 [2024-07-26 05:21:04.260331] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:45.282 [2024-07-26 05:21:04.260366] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b780 00:23:45.282 [2024-07-26 05:21:04.260378] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:45.282 [2024-07-26 05:21:04.260776] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:45.282 [2024-07-26 05:21:04.260798] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:45.282 [2024-07-26 05:21:04.260881] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:23:45.282 [2024-07-26 05:21:04.260907] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:45.282 BaseBdev3 00:23:45.282 05:21:04 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:45.541 05:21:04 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:45.541 [2024-07-26 05:21:04.624338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:45.541 [2024-07-26 05:21:04.624392] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:45.541 [2024-07-26 05:21:04.624421] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ba80 00:23:45.541 [2024-07-26 05:21:04.624434] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:45.541 [2024-07-26 05:21:04.624854] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:45.541 [2024-07-26 05:21:04.624875] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:45.541 [2024-07-26 05:21:04.624977] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:23:45.541 [2024-07-26 05:21:04.625198] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:45.541 spare 00:23:45.541 05:21:04 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state 
raid_bdev1 online raid5f 64 3 00:23:45.541 05:21:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:45.800 05:21:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:45.800 05:21:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:45.800 05:21:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:45.800 05:21:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:45.800 05:21:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:45.800 05:21:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:45.800 05:21:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:45.800 05:21:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:45.800 05:21:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.800 05:21:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.800 [2024-07-26 05:21:04.725425] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000b480 00:23:45.800 [2024-07-26 05:21:04.725449] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:45.800 [2024-07-26 05:21:04.725573] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000046fb0 00:23:45.800 [2024-07-26 05:21:04.729811] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000b480 00:23:45.800 [2024-07-26 05:21:04.729984] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000b480 00:23:45.800 [2024-07-26 05:21:04.730287] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:45.800 05:21:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:45.800 "name": "raid_bdev1", 00:23:45.800 "uuid": "1c34d360-f6c9-4cdc-a800-bc09d2342471", 00:23:45.800 "strip_size_kb": 64, 00:23:45.800 "state": "online", 00:23:45.800 "raid_level": "raid5f", 00:23:45.800 "superblock": true, 00:23:45.800 "num_base_bdevs": 3, 00:23:45.800 "num_base_bdevs_discovered": 3, 00:23:45.800 "num_base_bdevs_operational": 3, 00:23:45.800 "base_bdevs_list": [ 00:23:45.800 { 00:23:45.800 "name": "spare", 00:23:45.800 "uuid": "4a9a0765-de85-5b4b-a9c7-f98c78f63981", 00:23:45.800 "is_configured": true, 00:23:45.800 "data_offset": 2048, 00:23:45.800 "data_size": 63488 00:23:45.800 }, 00:23:45.800 { 00:23:45.800 "name": "BaseBdev2", 00:23:45.800 "uuid": "9226e872-1753-5400-9e35-938d71973553", 00:23:45.800 "is_configured": true, 00:23:45.800 "data_offset": 2048, 00:23:45.800 "data_size": 63488 00:23:45.800 }, 00:23:45.800 { 00:23:45.800 "name": "BaseBdev3", 00:23:45.800 "uuid": "8bc0389c-f342-5dae-aebe-eea89f20b4c1", 00:23:45.800 "is_configured": true, 00:23:45.800 "data_offset": 2048, 00:23:45.800 "data_size": 63488 00:23:45.800 } 00:23:45.800 ] 00:23:45.800 }' 00:23:45.800 05:21:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:45.800 05:21:04 -- common/autotest_common.sh@10 -- # set +x 00:23:46.368 05:21:05 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:46.368 05:21:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:46.368 05:21:05 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:46.368 05:21:05 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:46.368 05:21:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:46.368 05:21:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.368 05:21:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.368 05:21:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:46.368 "name": "raid_bdev1", 00:23:46.368 "uuid": "1c34d360-f6c9-4cdc-a800-bc09d2342471", 00:23:46.368 "strip_size_kb": 64, 00:23:46.368 "state": "online", 00:23:46.368 "raid_level": "raid5f", 00:23:46.368 "superblock": true, 00:23:46.368 "num_base_bdevs": 3, 00:23:46.368 "num_base_bdevs_discovered": 3, 00:23:46.368 "num_base_bdevs_operational": 3, 00:23:46.368 "base_bdevs_list": [ 00:23:46.368 { 00:23:46.368 "name": "spare", 00:23:46.368 "uuid": "4a9a0765-de85-5b4b-a9c7-f98c78f63981", 00:23:46.368 "is_configured": true, 00:23:46.368 "data_offset": 2048, 00:23:46.368 "data_size": 63488 00:23:46.368 }, 00:23:46.368 { 00:23:46.368 "name": "BaseBdev2", 00:23:46.368 "uuid": "9226e872-1753-5400-9e35-938d71973553", 00:23:46.368 "is_configured": true, 00:23:46.368 "data_offset": 2048, 00:23:46.368 "data_size": 63488 00:23:46.368 }, 00:23:46.368 { 00:23:46.368 "name": "BaseBdev3", 00:23:46.368 "uuid": "8bc0389c-f342-5dae-aebe-eea89f20b4c1", 00:23:46.368 "is_configured": true, 00:23:46.368 "data_offset": 2048, 00:23:46.368 "data_size": 63488 00:23:46.368 } 00:23:46.368 ] 00:23:46.368 }' 00:23:46.368 05:21:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:46.368 05:21:05 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:46.368 05:21:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:46.368 05:21:05 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:46.368 05:21:05 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:46.368 05:21:05 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.627 05:21:05 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:23:46.627 05:21:05 -- bdev/bdev_raid.sh@709 -- # killprocess 83954 00:23:46.627 05:21:05 -- common/autotest_common.sh@926 -- # '[' -z 83954 ']' 00:23:46.627 05:21:05 -- common/autotest_common.sh@930 -- # kill -0 83954 00:23:46.627 05:21:05 -- common/autotest_common.sh@931 -- # uname 00:23:46.627 05:21:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:46.627 05:21:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83954 00:23:46.627 killing process with pid 83954 00:23:46.627 Received shutdown signal, test time was about 60.000000 seconds 00:23:46.627 00:23:46.627 Latency(us) 00:23:46.627 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.627 =================================================================================================================== 00:23:46.627 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:46.627 05:21:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:46.627 05:21:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:46.627 05:21:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83954' 00:23:46.627 05:21:05 -- common/autotest_common.sh@945 -- # kill 83954 00:23:46.627 [2024-07-26 05:21:05.691847] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:46.627 05:21:05 -- common/autotest_common.sh@950 -- # wait 83954 00:23:46.627 [2024-07-26 05:21:05.691926] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:46.627 [2024-07-26 05:21:05.692026] bdev_raid.c: 
426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:46.627 [2024-07-26 05:21:05.692055] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000b480 name raid_bdev1, state offline 00:23:46.885 [2024-07-26 05:21:05.942484] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:47.822 00:23:47.822 real 0m20.584s 00:23:47.822 user 0m30.079s 00:23:47.822 sys 0m2.727s 00:23:47.822 05:21:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:47.822 05:21:06 -- common/autotest_common.sh@10 -- # set +x 00:23:47.822 ************************************ 00:23:47.822 END TEST raid5f_rebuild_test_sb 00:23:47.822 ************************************ 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:23:47.822 05:21:06 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:23:47.822 05:21:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:47.822 05:21:06 -- common/autotest_common.sh@10 -- # set +x 00:23:47.822 ************************************ 00:23:47.822 START TEST raid5f_state_function_test 00:23:47.822 ************************************ 00:23:47.822 05:21:06 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 4 false 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:23:47.822 05:21:06 -- 
bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@226 -- # raid_pid=84514 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:47.822 Process raid pid: 84514 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 84514' 00:23:47.822 05:21:06 -- bdev/bdev_raid.sh@228 -- # waitforlisten 84514 /var/tmp/spdk-raid.sock 00:23:47.822 05:21:06 -- common/autotest_common.sh@819 -- # '[' -z 84514 ']' 00:23:47.822 05:21:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:47.822 05:21:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:47.822 05:21:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:47.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:47.822 05:21:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:47.822 05:21:06 -- common/autotest_common.sh@10 -- # set +x 00:23:48.081 [2024-07-26 05:21:06.984312] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:23:48.081 [2024-07-26 05:21:06.984653] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:48.081 [2024-07-26 05:21:07.156480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.342 [2024-07-26 05:21:07.303104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.342 [2024-07-26 05:21:07.449279] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:48.946 05:21:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:48.946 05:21:07 -- common/autotest_common.sh@852 -- # return 0 00:23:48.946 05:21:07 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:49.205 [2024-07-26 05:21:08.058528] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:49.205 [2024-07-26 05:21:08.059197] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:49.205 [2024-07-26 05:21:08.059227] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:49.205 [2024-07-26 05:21:08.059252] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:49.205 [2024-07-26 05:21:08.059263] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:49.205 [2024-07-26 05:21:08.059277] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:49.205 [2024-07-26 05:21:08.059285] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:49.205 [2024-07-26 05:21:08.059298] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:49.205 05:21:08 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:49.205 05:21:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:49.205 05:21:08 -- bdev/bdev_raid.sh@118 -- # local 
expected_state=configuring 00:23:49.205 05:21:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:49.205 05:21:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:49.205 05:21:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:49.205 05:21:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:49.205 05:21:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:49.205 05:21:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:49.205 05:21:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:49.205 05:21:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.205 05:21:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:49.205 05:21:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:49.205 "name": "Existed_Raid", 00:23:49.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.205 "strip_size_kb": 64, 00:23:49.205 "state": "configuring", 00:23:49.205 "raid_level": "raid5f", 00:23:49.206 "superblock": false, 00:23:49.206 "num_base_bdevs": 4, 00:23:49.206 "num_base_bdevs_discovered": 0, 00:23:49.206 "num_base_bdevs_operational": 4, 00:23:49.206 "base_bdevs_list": [ 00:23:49.206 { 00:23:49.206 "name": "BaseBdev1", 00:23:49.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.206 "is_configured": false, 00:23:49.206 "data_offset": 0, 00:23:49.206 "data_size": 0 00:23:49.206 }, 00:23:49.206 { 00:23:49.206 "name": "BaseBdev2", 00:23:49.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.206 "is_configured": false, 00:23:49.206 "data_offset": 0, 00:23:49.206 "data_size": 0 00:23:49.206 }, 00:23:49.206 { 00:23:49.206 "name": "BaseBdev3", 00:23:49.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.206 "is_configured": false, 00:23:49.206 "data_offset": 0, 00:23:49.206 "data_size": 0 00:23:49.206 }, 00:23:49.206 { 00:23:49.206 "name": "BaseBdev4", 00:23:49.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.206 "is_configured": false, 00:23:49.206 "data_offset": 0, 00:23:49.206 "data_size": 0 00:23:49.206 } 00:23:49.206 ] 00:23:49.206 }' 00:23:49.206 05:21:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:49.206 05:21:08 -- common/autotest_common.sh@10 -- # set +x 00:23:49.465 05:21:08 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:49.724 [2024-07-26 05:21:08.726631] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:49.724 [2024-07-26 05:21:08.726673] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:23:49.724 05:21:08 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:49.984 [2024-07-26 05:21:08.914751] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:49.984 [2024-07-26 05:21:08.914970] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:49.984 [2024-07-26 05:21:08.914996] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:49.984 [2024-07-26 05:21:08.915043] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:49.984 [2024-07-26 05:21:08.915056] 
bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:49.984 [2024-07-26 05:21:08.915071] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:49.984 [2024-07-26 05:21:08.915079] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:49.984 [2024-07-26 05:21:08.915107] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:49.984 05:21:08 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:50.243 [2024-07-26 05:21:09.126476] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:50.243 BaseBdev1 00:23:50.243 05:21:09 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:23:50.243 05:21:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:23:50.243 05:21:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:50.243 05:21:09 -- common/autotest_common.sh@889 -- # local i 00:23:50.243 05:21:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:50.243 05:21:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:50.243 05:21:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:50.243 05:21:09 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:50.502 [ 00:23:50.502 { 00:23:50.502 "name": "BaseBdev1", 00:23:50.502 "aliases": [ 00:23:50.502 "c5c7ad7b-ea97-4dae-963f-7c04b228121c" 00:23:50.502 ], 00:23:50.502 "product_name": "Malloc disk", 00:23:50.502 "block_size": 512, 00:23:50.502 "num_blocks": 65536, 00:23:50.502 "uuid": "c5c7ad7b-ea97-4dae-963f-7c04b228121c", 00:23:50.502 "assigned_rate_limits": { 00:23:50.502 "rw_ios_per_sec": 0, 00:23:50.502 "rw_mbytes_per_sec": 0, 00:23:50.502 "r_mbytes_per_sec": 0, 00:23:50.502 "w_mbytes_per_sec": 0 00:23:50.502 }, 00:23:50.502 "claimed": true, 00:23:50.502 "claim_type": "exclusive_write", 00:23:50.502 "zoned": false, 00:23:50.502 "supported_io_types": { 00:23:50.502 "read": true, 00:23:50.502 "write": true, 00:23:50.502 "unmap": true, 00:23:50.502 "write_zeroes": true, 00:23:50.502 "flush": true, 00:23:50.502 "reset": true, 00:23:50.502 "compare": false, 00:23:50.502 "compare_and_write": false, 00:23:50.502 "abort": true, 00:23:50.502 "nvme_admin": false, 00:23:50.502 "nvme_io": false 00:23:50.502 }, 00:23:50.502 "memory_domains": [ 00:23:50.502 { 00:23:50.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:50.503 "dma_device_type": 2 00:23:50.503 } 00:23:50.503 ], 00:23:50.503 "driver_specific": {} 00:23:50.503 } 00:23:50.503 ] 00:23:50.503 05:21:09 -- common/autotest_common.sh@895 -- # return 0 00:23:50.503 05:21:09 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:50.503 05:21:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:50.503 05:21:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:50.503 05:21:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:50.503 05:21:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:50.503 05:21:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:50.503 05:21:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:50.503 05:21:09 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:23:50.503 05:21:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:50.503 05:21:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:50.503 05:21:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.503 05:21:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:50.762 05:21:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:50.762 "name": "Existed_Raid", 00:23:50.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:50.762 "strip_size_kb": 64, 00:23:50.762 "state": "configuring", 00:23:50.762 "raid_level": "raid5f", 00:23:50.762 "superblock": false, 00:23:50.762 "num_base_bdevs": 4, 00:23:50.762 "num_base_bdevs_discovered": 1, 00:23:50.762 "num_base_bdevs_operational": 4, 00:23:50.762 "base_bdevs_list": [ 00:23:50.762 { 00:23:50.762 "name": "BaseBdev1", 00:23:50.762 "uuid": "c5c7ad7b-ea97-4dae-963f-7c04b228121c", 00:23:50.762 "is_configured": true, 00:23:50.762 "data_offset": 0, 00:23:50.762 "data_size": 65536 00:23:50.762 }, 00:23:50.762 { 00:23:50.762 "name": "BaseBdev2", 00:23:50.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:50.762 "is_configured": false, 00:23:50.762 "data_offset": 0, 00:23:50.762 "data_size": 0 00:23:50.762 }, 00:23:50.762 { 00:23:50.762 "name": "BaseBdev3", 00:23:50.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:50.762 "is_configured": false, 00:23:50.762 "data_offset": 0, 00:23:50.762 "data_size": 0 00:23:50.762 }, 00:23:50.762 { 00:23:50.762 "name": "BaseBdev4", 00:23:50.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:50.762 "is_configured": false, 00:23:50.762 "data_offset": 0, 00:23:50.762 "data_size": 0 00:23:50.762 } 00:23:50.762 ] 00:23:50.762 }' 00:23:50.762 05:21:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:50.762 05:21:09 -- common/autotest_common.sh@10 -- # set +x 00:23:51.021 05:21:09 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:51.280 [2024-07-26 05:21:10.230793] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:51.280 [2024-07-26 05:21:10.231020] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:23:51.280 05:21:10 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:23:51.281 05:21:10 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:51.540 [2024-07-26 05:21:10.422836] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:51.540 [2024-07-26 05:21:10.424801] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:51.540 [2024-07-26 05:21:10.424852] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:51.540 [2024-07-26 05:21:10.424866] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:51.540 [2024-07-26 05:21:10.424880] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:51.540 [2024-07-26 05:21:10.424888] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:51.540 [2024-07-26 05:21:10.424901] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist 
now 00:23:51.540 05:21:10 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:23:51.540 05:21:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:51.540 05:21:10 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:51.540 05:21:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:51.540 05:21:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:51.540 05:21:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:51.540 05:21:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:51.540 05:21:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:51.540 05:21:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:51.540 05:21:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:51.540 05:21:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:51.540 05:21:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:51.540 05:21:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.540 05:21:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:51.540 05:21:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:51.540 "name": "Existed_Raid", 00:23:51.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.540 "strip_size_kb": 64, 00:23:51.540 "state": "configuring", 00:23:51.540 "raid_level": "raid5f", 00:23:51.540 "superblock": false, 00:23:51.540 "num_base_bdevs": 4, 00:23:51.540 "num_base_bdevs_discovered": 1, 00:23:51.540 "num_base_bdevs_operational": 4, 00:23:51.540 "base_bdevs_list": [ 00:23:51.540 { 00:23:51.540 "name": "BaseBdev1", 00:23:51.540 "uuid": "c5c7ad7b-ea97-4dae-963f-7c04b228121c", 00:23:51.540 "is_configured": true, 00:23:51.540 "data_offset": 0, 00:23:51.540 "data_size": 65536 00:23:51.540 }, 00:23:51.540 { 00:23:51.540 "name": "BaseBdev2", 00:23:51.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.540 "is_configured": false, 00:23:51.540 "data_offset": 0, 00:23:51.540 "data_size": 0 00:23:51.540 }, 00:23:51.540 { 00:23:51.540 "name": "BaseBdev3", 00:23:51.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.540 "is_configured": false, 00:23:51.540 "data_offset": 0, 00:23:51.540 "data_size": 0 00:23:51.540 }, 00:23:51.540 { 00:23:51.540 "name": "BaseBdev4", 00:23:51.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.540 "is_configured": false, 00:23:51.540 "data_offset": 0, 00:23:51.540 "data_size": 0 00:23:51.540 } 00:23:51.540 ] 00:23:51.540 }' 00:23:51.540 05:21:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:51.540 05:21:10 -- common/autotest_common.sh@10 -- # set +x 00:23:52.109 05:21:10 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:52.109 [2024-07-26 05:21:11.135817] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:52.109 BaseBdev2 00:23:52.109 05:21:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:23:52.109 05:21:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:23:52.109 05:21:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:52.109 05:21:11 -- common/autotest_common.sh@889 -- # local i 00:23:52.109 05:21:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:52.109 05:21:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:52.109 05:21:11 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:52.368 05:21:11 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:52.627 [ 00:23:52.627 { 00:23:52.627 "name": "BaseBdev2", 00:23:52.627 "aliases": [ 00:23:52.627 "a54e6641-f91d-4c13-9270-d2186cb0b9dc" 00:23:52.627 ], 00:23:52.627 "product_name": "Malloc disk", 00:23:52.627 "block_size": 512, 00:23:52.627 "num_blocks": 65536, 00:23:52.627 "uuid": "a54e6641-f91d-4c13-9270-d2186cb0b9dc", 00:23:52.627 "assigned_rate_limits": { 00:23:52.627 "rw_ios_per_sec": 0, 00:23:52.627 "rw_mbytes_per_sec": 0, 00:23:52.627 "r_mbytes_per_sec": 0, 00:23:52.627 "w_mbytes_per_sec": 0 00:23:52.627 }, 00:23:52.627 "claimed": true, 00:23:52.627 "claim_type": "exclusive_write", 00:23:52.627 "zoned": false, 00:23:52.627 "supported_io_types": { 00:23:52.627 "read": true, 00:23:52.627 "write": true, 00:23:52.627 "unmap": true, 00:23:52.627 "write_zeroes": true, 00:23:52.627 "flush": true, 00:23:52.627 "reset": true, 00:23:52.627 "compare": false, 00:23:52.627 "compare_and_write": false, 00:23:52.627 "abort": true, 00:23:52.627 "nvme_admin": false, 00:23:52.627 "nvme_io": false 00:23:52.627 }, 00:23:52.627 "memory_domains": [ 00:23:52.627 { 00:23:52.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.627 "dma_device_type": 2 00:23:52.627 } 00:23:52.627 ], 00:23:52.627 "driver_specific": {} 00:23:52.627 } 00:23:52.627 ] 00:23:52.627 05:21:11 -- common/autotest_common.sh@895 -- # return 0 00:23:52.627 05:21:11 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:52.627 05:21:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:52.627 05:21:11 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:52.627 05:21:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:52.627 05:21:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:52.627 05:21:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:52.628 05:21:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:52.628 05:21:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:52.628 05:21:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:52.628 05:21:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:52.628 05:21:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:52.628 05:21:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:52.628 05:21:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.628 05:21:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:52.887 05:21:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:52.887 "name": "Existed_Raid", 00:23:52.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.887 "strip_size_kb": 64, 00:23:52.887 "state": "configuring", 00:23:52.887 "raid_level": "raid5f", 00:23:52.887 "superblock": false, 00:23:52.887 "num_base_bdevs": 4, 00:23:52.887 "num_base_bdevs_discovered": 2, 00:23:52.887 "num_base_bdevs_operational": 4, 00:23:52.887 "base_bdevs_list": [ 00:23:52.887 { 00:23:52.887 "name": "BaseBdev1", 00:23:52.887 "uuid": "c5c7ad7b-ea97-4dae-963f-7c04b228121c", 00:23:52.887 "is_configured": true, 00:23:52.887 "data_offset": 0, 00:23:52.887 "data_size": 65536 00:23:52.887 }, 00:23:52.887 { 00:23:52.887 "name": "BaseBdev2", 00:23:52.887 "uuid": 
"a54e6641-f91d-4c13-9270-d2186cb0b9dc", 00:23:52.887 "is_configured": true, 00:23:52.887 "data_offset": 0, 00:23:52.887 "data_size": 65536 00:23:52.887 }, 00:23:52.887 { 00:23:52.887 "name": "BaseBdev3", 00:23:52.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.887 "is_configured": false, 00:23:52.887 "data_offset": 0, 00:23:52.887 "data_size": 0 00:23:52.887 }, 00:23:52.887 { 00:23:52.887 "name": "BaseBdev4", 00:23:52.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.887 "is_configured": false, 00:23:52.887 "data_offset": 0, 00:23:52.887 "data_size": 0 00:23:52.887 } 00:23:52.887 ] 00:23:52.887 }' 00:23:52.887 05:21:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:52.887 05:21:11 -- common/autotest_common.sh@10 -- # set +x 00:23:53.146 05:21:12 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:53.146 [2024-07-26 05:21:12.212217] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:53.146 BaseBdev3 00:23:53.146 05:21:12 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:23:53.146 05:21:12 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:23:53.146 05:21:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:53.146 05:21:12 -- common/autotest_common.sh@889 -- # local i 00:23:53.146 05:21:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:53.146 05:21:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:53.146 05:21:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:53.406 05:21:12 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:53.666 [ 00:23:53.666 { 00:23:53.666 "name": "BaseBdev3", 00:23:53.666 "aliases": [ 00:23:53.666 "fe730c2e-b416-4a6f-96fc-157645e9e771" 00:23:53.666 ], 00:23:53.666 "product_name": "Malloc disk", 00:23:53.666 "block_size": 512, 00:23:53.666 "num_blocks": 65536, 00:23:53.666 "uuid": "fe730c2e-b416-4a6f-96fc-157645e9e771", 00:23:53.666 "assigned_rate_limits": { 00:23:53.666 "rw_ios_per_sec": 0, 00:23:53.666 "rw_mbytes_per_sec": 0, 00:23:53.666 "r_mbytes_per_sec": 0, 00:23:53.666 "w_mbytes_per_sec": 0 00:23:53.666 }, 00:23:53.666 "claimed": true, 00:23:53.666 "claim_type": "exclusive_write", 00:23:53.666 "zoned": false, 00:23:53.666 "supported_io_types": { 00:23:53.666 "read": true, 00:23:53.666 "write": true, 00:23:53.666 "unmap": true, 00:23:53.666 "write_zeroes": true, 00:23:53.666 "flush": true, 00:23:53.666 "reset": true, 00:23:53.666 "compare": false, 00:23:53.666 "compare_and_write": false, 00:23:53.666 "abort": true, 00:23:53.666 "nvme_admin": false, 00:23:53.666 "nvme_io": false 00:23:53.666 }, 00:23:53.666 "memory_domains": [ 00:23:53.666 { 00:23:53.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:53.666 "dma_device_type": 2 00:23:53.666 } 00:23:53.666 ], 00:23:53.666 "driver_specific": {} 00:23:53.666 } 00:23:53.666 ] 00:23:53.666 05:21:12 -- common/autotest_common.sh@895 -- # return 0 00:23:53.666 05:21:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:53.666 05:21:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:53.666 05:21:12 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:53.666 05:21:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:53.666 05:21:12 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:53.666 05:21:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:53.666 05:21:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:53.666 05:21:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:53.666 05:21:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:53.666 05:21:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:53.666 05:21:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:53.666 05:21:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:53.666 05:21:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:53.666 05:21:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:53.925 05:21:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:53.925 "name": "Existed_Raid", 00:23:53.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.925 "strip_size_kb": 64, 00:23:53.925 "state": "configuring", 00:23:53.925 "raid_level": "raid5f", 00:23:53.925 "superblock": false, 00:23:53.925 "num_base_bdevs": 4, 00:23:53.925 "num_base_bdevs_discovered": 3, 00:23:53.925 "num_base_bdevs_operational": 4, 00:23:53.925 "base_bdevs_list": [ 00:23:53.925 { 00:23:53.925 "name": "BaseBdev1", 00:23:53.925 "uuid": "c5c7ad7b-ea97-4dae-963f-7c04b228121c", 00:23:53.925 "is_configured": true, 00:23:53.925 "data_offset": 0, 00:23:53.925 "data_size": 65536 00:23:53.925 }, 00:23:53.925 { 00:23:53.925 "name": "BaseBdev2", 00:23:53.925 "uuid": "a54e6641-f91d-4c13-9270-d2186cb0b9dc", 00:23:53.925 "is_configured": true, 00:23:53.925 "data_offset": 0, 00:23:53.925 "data_size": 65536 00:23:53.925 }, 00:23:53.925 { 00:23:53.925 "name": "BaseBdev3", 00:23:53.926 "uuid": "fe730c2e-b416-4a6f-96fc-157645e9e771", 00:23:53.926 "is_configured": true, 00:23:53.926 "data_offset": 0, 00:23:53.926 "data_size": 65536 00:23:53.926 }, 00:23:53.926 { 00:23:53.926 "name": "BaseBdev4", 00:23:53.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.926 "is_configured": false, 00:23:53.926 "data_offset": 0, 00:23:53.926 "data_size": 0 00:23:53.926 } 00:23:53.926 ] 00:23:53.926 }' 00:23:53.926 05:21:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:53.926 05:21:12 -- common/autotest_common.sh@10 -- # set +x 00:23:54.185 05:21:13 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:54.460 [2024-07-26 05:21:13.431119] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:54.460 [2024-07-26 05:21:13.431421] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:23:54.460 [2024-07-26 05:21:13.431478] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:23:54.460 [2024-07-26 05:21:13.431694] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:23:54.460 [2024-07-26 05:21:13.437252] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:23:54.460 [2024-07-26 05:21:13.437416] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:23:54.460 [2024-07-26 05:21:13.437801] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:54.460 BaseBdev4 00:23:54.460 05:21:13 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:23:54.460 05:21:13 -- 
common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:23:54.460 05:21:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:54.460 05:21:13 -- common/autotest_common.sh@889 -- # local i 00:23:54.460 05:21:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:54.460 05:21:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:54.460 05:21:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:54.722 05:21:13 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:54.980 [ 00:23:54.980 { 00:23:54.980 "name": "BaseBdev4", 00:23:54.980 "aliases": [ 00:23:54.980 "51d9716b-2ca0-4630-a8bc-46250726c7ef" 00:23:54.980 ], 00:23:54.980 "product_name": "Malloc disk", 00:23:54.980 "block_size": 512, 00:23:54.980 "num_blocks": 65536, 00:23:54.980 "uuid": "51d9716b-2ca0-4630-a8bc-46250726c7ef", 00:23:54.980 "assigned_rate_limits": { 00:23:54.980 "rw_ios_per_sec": 0, 00:23:54.980 "rw_mbytes_per_sec": 0, 00:23:54.980 "r_mbytes_per_sec": 0, 00:23:54.980 "w_mbytes_per_sec": 0 00:23:54.980 }, 00:23:54.980 "claimed": true, 00:23:54.980 "claim_type": "exclusive_write", 00:23:54.980 "zoned": false, 00:23:54.980 "supported_io_types": { 00:23:54.980 "read": true, 00:23:54.980 "write": true, 00:23:54.980 "unmap": true, 00:23:54.980 "write_zeroes": true, 00:23:54.980 "flush": true, 00:23:54.980 "reset": true, 00:23:54.980 "compare": false, 00:23:54.980 "compare_and_write": false, 00:23:54.980 "abort": true, 00:23:54.980 "nvme_admin": false, 00:23:54.980 "nvme_io": false 00:23:54.980 }, 00:23:54.980 "memory_domains": [ 00:23:54.980 { 00:23:54.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:54.980 "dma_device_type": 2 00:23:54.980 } 00:23:54.980 ], 00:23:54.980 "driver_specific": {} 00:23:54.980 } 00:23:54.980 ] 00:23:54.980 05:21:13 -- common/autotest_common.sh@895 -- # return 0 00:23:54.980 05:21:13 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:54.980 05:21:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:54.980 05:21:13 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:23:54.980 05:21:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:54.980 05:21:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:54.980 05:21:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:54.980 05:21:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:54.980 05:21:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:54.980 05:21:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:54.980 05:21:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:54.980 05:21:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:54.980 05:21:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:54.980 05:21:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.980 05:21:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:54.980 05:21:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:54.980 "name": "Existed_Raid", 00:23:54.980 "uuid": "b9c42d5e-f1d9-41c4-ab29-8fc962f9bb7e", 00:23:54.980 "strip_size_kb": 64, 00:23:54.980 "state": "online", 00:23:54.980 "raid_level": "raid5f", 00:23:54.980 "superblock": false, 00:23:54.980 "num_base_bdevs": 4, 00:23:54.980 
"num_base_bdevs_discovered": 4, 00:23:54.980 "num_base_bdevs_operational": 4, 00:23:54.980 "base_bdevs_list": [ 00:23:54.980 { 00:23:54.980 "name": "BaseBdev1", 00:23:54.980 "uuid": "c5c7ad7b-ea97-4dae-963f-7c04b228121c", 00:23:54.980 "is_configured": true, 00:23:54.980 "data_offset": 0, 00:23:54.980 "data_size": 65536 00:23:54.980 }, 00:23:54.980 { 00:23:54.980 "name": "BaseBdev2", 00:23:54.980 "uuid": "a54e6641-f91d-4c13-9270-d2186cb0b9dc", 00:23:54.980 "is_configured": true, 00:23:54.980 "data_offset": 0, 00:23:54.980 "data_size": 65536 00:23:54.980 }, 00:23:54.980 { 00:23:54.980 "name": "BaseBdev3", 00:23:54.980 "uuid": "fe730c2e-b416-4a6f-96fc-157645e9e771", 00:23:54.980 "is_configured": true, 00:23:54.980 "data_offset": 0, 00:23:54.980 "data_size": 65536 00:23:54.980 }, 00:23:54.980 { 00:23:54.980 "name": "BaseBdev4", 00:23:54.980 "uuid": "51d9716b-2ca0-4630-a8bc-46250726c7ef", 00:23:54.980 "is_configured": true, 00:23:54.980 "data_offset": 0, 00:23:54.980 "data_size": 65536 00:23:54.980 } 00:23:54.980 ] 00:23:54.980 }' 00:23:54.980 05:21:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:54.980 05:21:14 -- common/autotest_common.sh@10 -- # set +x 00:23:55.239 05:21:14 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:55.498 [2024-07-26 05:21:14.567706] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:55.757 05:21:14 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:23:55.757 05:21:14 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:23:55.757 05:21:14 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:55.757 05:21:14 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:55.757 05:21:14 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:23:55.758 05:21:14 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:55.758 05:21:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:55.758 05:21:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:55.758 05:21:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:55.758 05:21:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:55.758 05:21:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:55.758 05:21:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:55.758 05:21:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:55.758 05:21:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:55.758 05:21:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:55.758 05:21:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.758 05:21:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:55.758 05:21:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:55.758 "name": "Existed_Raid", 00:23:55.758 "uuid": "b9c42d5e-f1d9-41c4-ab29-8fc962f9bb7e", 00:23:55.758 "strip_size_kb": 64, 00:23:55.758 "state": "online", 00:23:55.758 "raid_level": "raid5f", 00:23:55.758 "superblock": false, 00:23:55.758 "num_base_bdevs": 4, 00:23:55.758 "num_base_bdevs_discovered": 3, 00:23:55.758 "num_base_bdevs_operational": 3, 00:23:55.758 "base_bdevs_list": [ 00:23:55.758 { 00:23:55.758 "name": null, 00:23:55.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.758 "is_configured": false, 00:23:55.758 "data_offset": 0, 00:23:55.758 "data_size": 65536 00:23:55.758 }, 00:23:55.758 { 00:23:55.758 "name": 
"BaseBdev2", 00:23:55.758 "uuid": "a54e6641-f91d-4c13-9270-d2186cb0b9dc", 00:23:55.758 "is_configured": true, 00:23:55.758 "data_offset": 0, 00:23:55.758 "data_size": 65536 00:23:55.758 }, 00:23:55.758 { 00:23:55.758 "name": "BaseBdev3", 00:23:55.758 "uuid": "fe730c2e-b416-4a6f-96fc-157645e9e771", 00:23:55.758 "is_configured": true, 00:23:55.758 "data_offset": 0, 00:23:55.758 "data_size": 65536 00:23:55.758 }, 00:23:55.758 { 00:23:55.758 "name": "BaseBdev4", 00:23:55.758 "uuid": "51d9716b-2ca0-4630-a8bc-46250726c7ef", 00:23:55.758 "is_configured": true, 00:23:55.758 "data_offset": 0, 00:23:55.758 "data_size": 65536 00:23:55.758 } 00:23:55.758 ] 00:23:55.758 }' 00:23:55.758 05:21:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:55.758 05:21:14 -- common/autotest_common.sh@10 -- # set +x 00:23:56.325 05:21:15 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:23:56.325 05:21:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:56.325 05:21:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:56.325 05:21:15 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:56.325 05:21:15 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:56.325 05:21:15 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:56.325 05:21:15 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:56.584 [2024-07-26 05:21:15.506662] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:56.584 [2024-07-26 05:21:15.506696] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:56.584 [2024-07-26 05:21:15.506774] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:56.584 05:21:15 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:56.584 05:21:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:56.584 05:21:15 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:56.584 05:21:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:56.843 05:21:15 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:56.843 05:21:15 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:56.843 05:21:15 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:57.102 [2024-07-26 05:21:15.992685] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:57.102 05:21:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:57.102 05:21:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:57.102 05:21:16 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.102 05:21:16 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:57.360 05:21:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:57.360 05:21:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:57.360 05:21:16 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:23:57.360 [2024-07-26 05:21:16.430653] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:57.360 [2024-07-26 05:21:16.430934] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state 
offline 00:23:57.619 05:21:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:57.619 05:21:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:57.620 05:21:16 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.620 05:21:16 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:23:57.620 05:21:16 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:23:57.620 05:21:16 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:23:57.620 05:21:16 -- bdev/bdev_raid.sh@287 -- # killprocess 84514 00:23:57.620 05:21:16 -- common/autotest_common.sh@926 -- # '[' -z 84514 ']' 00:23:57.620 05:21:16 -- common/autotest_common.sh@930 -- # kill -0 84514 00:23:57.620 05:21:16 -- common/autotest_common.sh@931 -- # uname 00:23:57.620 05:21:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:57.620 05:21:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84514 00:23:57.620 05:21:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:57.620 killing process with pid 84514 00:23:57.620 05:21:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:57.620 05:21:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84514' 00:23:57.620 05:21:16 -- common/autotest_common.sh@945 -- # kill 84514 00:23:57.620 [2024-07-26 05:21:16.721370] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:57.620 05:21:16 -- common/autotest_common.sh@950 -- # wait 84514 00:23:57.620 [2024-07-26 05:21:16.721495] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:58.557 05:21:17 -- bdev/bdev_raid.sh@289 -- # return 0 00:23:58.557 00:23:58.557 real 0m10.728s 00:23:58.557 user 0m17.964s 00:23:58.557 sys 0m1.598s 00:23:58.557 ************************************ 00:23:58.557 END TEST raid5f_state_function_test 00:23:58.557 ************************************ 00:23:58.557 05:21:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:58.557 05:21:17 -- common/autotest_common.sh@10 -- # set +x 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:23:58.816 05:21:17 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:23:58.816 05:21:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:58.816 05:21:17 -- common/autotest_common.sh@10 -- # set +x 00:23:58.816 ************************************ 00:23:58.816 START TEST raid5f_state_function_test_sb 00:23:58.816 ************************************ 00:23:58.816 05:21:17 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 4 true 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:58.816 05:21:17 -- 
bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@226 -- # raid_pid=84897 00:23:58.816 Process raid pid: 84897 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 84897' 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@228 -- # waitforlisten 84897 /var/tmp/spdk-raid.sock 00:23:58.816 05:21:17 -- common/autotest_common.sh@819 -- # '[' -z 84897 ']' 00:23:58.816 05:21:17 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:58.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:58.816 05:21:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:58.816 05:21:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:58.816 05:21:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:58.816 05:21:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:58.816 05:21:17 -- common/autotest_common.sh@10 -- # set +x 00:23:58.816 [2024-07-26 05:21:17.759473] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
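For readers skimming the trace, the superblock variant of the state-function test starting here drives the same rpc.py surface seen throughout this log; a minimal sketch of the create-and-inspect sequence it replays, assuming the script path and socket shown in the trace (BaseBdev1..BaseBdev4 are the test's placeholder names and do not exist yet at this point):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-raid.sock
    # create a 4-disk raid5f array with a 64 KiB strip size and on-disk superblocks (-s)
    $RPC -s $SOCK bdev_raid_create -z 64 -s -r raid5f \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # inspect the array; with no base bdevs registered yet it sits in the "configuring" state
    $RPC -s $SOCK bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
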
00:23:58.816 [2024-07-26 05:21:17.759800] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.075 [2024-07-26 05:21:17.930514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.075 [2024-07-26 05:21:18.079719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.335 [2024-07-26 05:21:18.224454] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:59.593 05:21:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:59.593 05:21:18 -- common/autotest_common.sh@852 -- # return 0 00:23:59.593 05:21:18 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:59.852 [2024-07-26 05:21:18.870353] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:59.852 [2024-07-26 05:21:18.870444] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:59.852 [2024-07-26 05:21:18.870460] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:59.852 [2024-07-26 05:21:18.870474] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:59.852 [2024-07-26 05:21:18.870482] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:59.852 [2024-07-26 05:21:18.870494] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:59.852 [2024-07-26 05:21:18.870502] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:59.852 [2024-07-26 05:21:18.870514] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:59.852 05:21:18 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:59.852 05:21:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:59.852 05:21:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:59.852 05:21:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:59.852 05:21:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:59.852 05:21:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:59.852 05:21:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:59.852 05:21:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:59.852 05:21:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:59.852 05:21:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:59.852 05:21:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:59.852 05:21:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.110 05:21:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:00.110 "name": "Existed_Raid", 00:24:00.110 "uuid": "43fe2e4e-6eb4-4da9-9b3e-fbcf6fa4e11d", 00:24:00.110 "strip_size_kb": 64, 00:24:00.110 "state": "configuring", 00:24:00.110 "raid_level": "raid5f", 00:24:00.110 "superblock": true, 00:24:00.110 "num_base_bdevs": 4, 00:24:00.111 "num_base_bdevs_discovered": 0, 00:24:00.111 "num_base_bdevs_operational": 4, 00:24:00.111 "base_bdevs_list": [ 00:24:00.111 { 
00:24:00.111 "name": "BaseBdev1", 00:24:00.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.111 "is_configured": false, 00:24:00.111 "data_offset": 0, 00:24:00.111 "data_size": 0 00:24:00.111 }, 00:24:00.111 { 00:24:00.111 "name": "BaseBdev2", 00:24:00.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.111 "is_configured": false, 00:24:00.111 "data_offset": 0, 00:24:00.111 "data_size": 0 00:24:00.111 }, 00:24:00.111 { 00:24:00.111 "name": "BaseBdev3", 00:24:00.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.111 "is_configured": false, 00:24:00.111 "data_offset": 0, 00:24:00.111 "data_size": 0 00:24:00.111 }, 00:24:00.111 { 00:24:00.111 "name": "BaseBdev4", 00:24:00.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.111 "is_configured": false, 00:24:00.111 "data_offset": 0, 00:24:00.111 "data_size": 0 00:24:00.111 } 00:24:00.111 ] 00:24:00.111 }' 00:24:00.111 05:21:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:00.111 05:21:19 -- common/autotest_common.sh@10 -- # set +x 00:24:00.369 05:21:19 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:00.628 [2024-07-26 05:21:19.590346] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:00.628 [2024-07-26 05:21:19.590529] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:24:00.628 05:21:19 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:00.886 [2024-07-26 05:21:19.834441] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:00.886 [2024-07-26 05:21:19.834636] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:00.886 [2024-07-26 05:21:19.834661] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:00.886 [2024-07-26 05:21:19.834678] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:00.886 [2024-07-26 05:21:19.834687] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:00.886 [2024-07-26 05:21:19.834699] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:00.886 [2024-07-26 05:21:19.834708] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:00.886 [2024-07-26 05:21:19.834720] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:00.886 05:21:19 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:01.146 BaseBdev1 00:24:01.146 [2024-07-26 05:21:20.099133] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:01.146 05:21:20 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:24:01.146 05:21:20 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:24:01.146 05:21:20 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:01.146 05:21:20 -- common/autotest_common.sh@889 -- # local i 00:24:01.146 05:21:20 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:01.146 05:21:20 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:01.146 05:21:20 -- common/autotest_common.sh@892 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:01.405 05:21:20 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:01.405 [ 00:24:01.405 { 00:24:01.405 "name": "BaseBdev1", 00:24:01.405 "aliases": [ 00:24:01.405 "5641325b-fbab-46c0-8311-e02efaca255a" 00:24:01.405 ], 00:24:01.405 "product_name": "Malloc disk", 00:24:01.405 "block_size": 512, 00:24:01.405 "num_blocks": 65536, 00:24:01.405 "uuid": "5641325b-fbab-46c0-8311-e02efaca255a", 00:24:01.405 "assigned_rate_limits": { 00:24:01.405 "rw_ios_per_sec": 0, 00:24:01.405 "rw_mbytes_per_sec": 0, 00:24:01.405 "r_mbytes_per_sec": 0, 00:24:01.405 "w_mbytes_per_sec": 0 00:24:01.405 }, 00:24:01.405 "claimed": true, 00:24:01.405 "claim_type": "exclusive_write", 00:24:01.405 "zoned": false, 00:24:01.405 "supported_io_types": { 00:24:01.405 "read": true, 00:24:01.405 "write": true, 00:24:01.405 "unmap": true, 00:24:01.405 "write_zeroes": true, 00:24:01.405 "flush": true, 00:24:01.405 "reset": true, 00:24:01.405 "compare": false, 00:24:01.405 "compare_and_write": false, 00:24:01.405 "abort": true, 00:24:01.405 "nvme_admin": false, 00:24:01.405 "nvme_io": false 00:24:01.405 }, 00:24:01.405 "memory_domains": [ 00:24:01.405 { 00:24:01.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:01.405 "dma_device_type": 2 00:24:01.405 } 00:24:01.405 ], 00:24:01.405 "driver_specific": {} 00:24:01.405 } 00:24:01.405 ] 00:24:01.405 05:21:20 -- common/autotest_common.sh@895 -- # return 0 00:24:01.405 05:21:20 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:01.405 05:21:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:01.405 05:21:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:01.405 05:21:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:01.405 05:21:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:01.405 05:21:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:01.405 05:21:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:01.405 05:21:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:01.405 05:21:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:01.405 05:21:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:01.405 05:21:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.405 05:21:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:01.663 05:21:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:01.663 "name": "Existed_Raid", 00:24:01.664 "uuid": "3dec5afb-fe49-41d0-9952-932a9a191436", 00:24:01.664 "strip_size_kb": 64, 00:24:01.664 "state": "configuring", 00:24:01.664 "raid_level": "raid5f", 00:24:01.664 "superblock": true, 00:24:01.664 "num_base_bdevs": 4, 00:24:01.664 "num_base_bdevs_discovered": 1, 00:24:01.664 "num_base_bdevs_operational": 4, 00:24:01.664 "base_bdevs_list": [ 00:24:01.664 { 00:24:01.664 "name": "BaseBdev1", 00:24:01.664 "uuid": "5641325b-fbab-46c0-8311-e02efaca255a", 00:24:01.664 "is_configured": true, 00:24:01.664 "data_offset": 2048, 00:24:01.664 "data_size": 63488 00:24:01.664 }, 00:24:01.664 { 00:24:01.664 "name": "BaseBdev2", 00:24:01.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.664 "is_configured": false, 00:24:01.664 "data_offset": 0, 00:24:01.664 "data_size": 0 
00:24:01.664 }, 00:24:01.664 { 00:24:01.664 "name": "BaseBdev3", 00:24:01.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.664 "is_configured": false, 00:24:01.664 "data_offset": 0, 00:24:01.664 "data_size": 0 00:24:01.664 }, 00:24:01.664 { 00:24:01.664 "name": "BaseBdev4", 00:24:01.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.664 "is_configured": false, 00:24:01.664 "data_offset": 0, 00:24:01.664 "data_size": 0 00:24:01.664 } 00:24:01.664 ] 00:24:01.664 }' 00:24:01.664 05:21:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:01.664 05:21:20 -- common/autotest_common.sh@10 -- # set +x 00:24:01.922 05:21:21 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:02.181 [2024-07-26 05:21:21.203548] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:02.181 [2024-07-26 05:21:21.203598] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:24:02.181 05:21:21 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:24:02.181 05:21:21 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:02.484 05:21:21 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:02.743 BaseBdev1 00:24:02.743 05:21:21 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:24:02.743 05:21:21 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:24:02.743 05:21:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:02.743 05:21:21 -- common/autotest_common.sh@889 -- # local i 00:24:02.743 05:21:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:02.743 05:21:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:02.743 05:21:21 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:03.002 05:21:21 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:03.262 [ 00:24:03.262 { 00:24:03.262 "name": "BaseBdev1", 00:24:03.262 "aliases": [ 00:24:03.262 "53688c0d-5619-4d91-95af-0110bf1aca0b" 00:24:03.262 ], 00:24:03.262 "product_name": "Malloc disk", 00:24:03.262 "block_size": 512, 00:24:03.262 "num_blocks": 65536, 00:24:03.262 "uuid": "53688c0d-5619-4d91-95af-0110bf1aca0b", 00:24:03.262 "assigned_rate_limits": { 00:24:03.262 "rw_ios_per_sec": 0, 00:24:03.262 "rw_mbytes_per_sec": 0, 00:24:03.262 "r_mbytes_per_sec": 0, 00:24:03.262 "w_mbytes_per_sec": 0 00:24:03.262 }, 00:24:03.262 "claimed": false, 00:24:03.262 "zoned": false, 00:24:03.262 "supported_io_types": { 00:24:03.262 "read": true, 00:24:03.262 "write": true, 00:24:03.262 "unmap": true, 00:24:03.262 "write_zeroes": true, 00:24:03.262 "flush": true, 00:24:03.262 "reset": true, 00:24:03.262 "compare": false, 00:24:03.262 "compare_and_write": false, 00:24:03.262 "abort": true, 00:24:03.262 "nvme_admin": false, 00:24:03.262 "nvme_io": false 00:24:03.262 }, 00:24:03.262 "memory_domains": [ 00:24:03.262 { 00:24:03.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:03.262 "dma_device_type": 2 00:24:03.262 } 00:24:03.262 ], 00:24:03.262 "driver_specific": {} 00:24:03.262 } 00:24:03.262 ] 00:24:03.262 05:21:22 -- common/autotest_common.sh@895 -- # return 0 00:24:03.262 05:21:22 -- 
bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:03.262 [2024-07-26 05:21:22.346066] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:03.262 [2024-07-26 05:21:22.348102] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:03.262 [2024-07-26 05:21:22.348340] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:03.262 [2024-07-26 05:21:22.348382] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:03.262 [2024-07-26 05:21:22.348400] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:03.262 [2024-07-26 05:21:22.348410] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:03.262 [2024-07-26 05:21:22.348426] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:03.262 05:21:22 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:24:03.262 05:21:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:03.262 05:21:22 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:03.262 05:21:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:03.262 05:21:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:03.262 05:21:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:03.262 05:21:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:03.262 05:21:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:03.262 05:21:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:03.262 05:21:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:03.262 05:21:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:03.262 05:21:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:03.262 05:21:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.262 05:21:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:03.521 05:21:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:03.521 "name": "Existed_Raid", 00:24:03.521 "uuid": "2dfe539c-ef2a-4d63-b55e-17660a5c683c", 00:24:03.521 "strip_size_kb": 64, 00:24:03.521 "state": "configuring", 00:24:03.521 "raid_level": "raid5f", 00:24:03.521 "superblock": true, 00:24:03.521 "num_base_bdevs": 4, 00:24:03.521 "num_base_bdevs_discovered": 1, 00:24:03.521 "num_base_bdevs_operational": 4, 00:24:03.521 "base_bdevs_list": [ 00:24:03.521 { 00:24:03.521 "name": "BaseBdev1", 00:24:03.521 "uuid": "53688c0d-5619-4d91-95af-0110bf1aca0b", 00:24:03.521 "is_configured": true, 00:24:03.521 "data_offset": 2048, 00:24:03.521 "data_size": 63488 00:24:03.521 }, 00:24:03.521 { 00:24:03.521 "name": "BaseBdev2", 00:24:03.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:03.521 "is_configured": false, 00:24:03.521 "data_offset": 0, 00:24:03.521 "data_size": 0 00:24:03.521 }, 00:24:03.521 { 00:24:03.521 "name": "BaseBdev3", 00:24:03.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:03.521 "is_configured": false, 00:24:03.521 "data_offset": 0, 00:24:03.521 "data_size": 0 00:24:03.521 }, 00:24:03.521 { 00:24:03.521 "name": "BaseBdev4", 00:24:03.521 "uuid": "00000000-0000-0000-0000-000000000000", 
00:24:03.521 "is_configured": false, 00:24:03.521 "data_offset": 0, 00:24:03.521 "data_size": 0 00:24:03.521 } 00:24:03.521 ] 00:24:03.521 }' 00:24:03.521 05:21:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:03.521 05:21:22 -- common/autotest_common.sh@10 -- # set +x 00:24:04.089 05:21:22 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:04.089 [2024-07-26 05:21:23.111972] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:04.089 BaseBdev2 00:24:04.089 05:21:23 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:24:04.089 05:21:23 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:24:04.089 05:21:23 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:04.089 05:21:23 -- common/autotest_common.sh@889 -- # local i 00:24:04.089 05:21:23 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:04.089 05:21:23 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:04.089 05:21:23 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:04.348 05:21:23 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:04.607 [ 00:24:04.607 { 00:24:04.607 "name": "BaseBdev2", 00:24:04.607 "aliases": [ 00:24:04.607 "abcb8d77-26b4-469e-997c-1280f7a0a43a" 00:24:04.607 ], 00:24:04.607 "product_name": "Malloc disk", 00:24:04.607 "block_size": 512, 00:24:04.607 "num_blocks": 65536, 00:24:04.607 "uuid": "abcb8d77-26b4-469e-997c-1280f7a0a43a", 00:24:04.607 "assigned_rate_limits": { 00:24:04.607 "rw_ios_per_sec": 0, 00:24:04.607 "rw_mbytes_per_sec": 0, 00:24:04.607 "r_mbytes_per_sec": 0, 00:24:04.607 "w_mbytes_per_sec": 0 00:24:04.607 }, 00:24:04.607 "claimed": true, 00:24:04.607 "claim_type": "exclusive_write", 00:24:04.607 "zoned": false, 00:24:04.607 "supported_io_types": { 00:24:04.607 "read": true, 00:24:04.607 "write": true, 00:24:04.607 "unmap": true, 00:24:04.607 "write_zeroes": true, 00:24:04.607 "flush": true, 00:24:04.607 "reset": true, 00:24:04.607 "compare": false, 00:24:04.607 "compare_and_write": false, 00:24:04.607 "abort": true, 00:24:04.607 "nvme_admin": false, 00:24:04.607 "nvme_io": false 00:24:04.607 }, 00:24:04.607 "memory_domains": [ 00:24:04.607 { 00:24:04.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:04.607 "dma_device_type": 2 00:24:04.607 } 00:24:04.607 ], 00:24:04.607 "driver_specific": {} 00:24:04.607 } 00:24:04.607 ] 00:24:04.607 05:21:23 -- common/autotest_common.sh@895 -- # return 0 00:24:04.607 05:21:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:04.607 05:21:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:04.607 05:21:23 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:04.607 05:21:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:04.607 05:21:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:04.607 05:21:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:04.607 05:21:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:04.607 05:21:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:04.607 05:21:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:04.607 05:21:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:04.607 05:21:23 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:24:04.607 05:21:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:04.607 05:21:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:04.608 05:21:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.866 05:21:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:04.866 "name": "Existed_Raid", 00:24:04.866 "uuid": "2dfe539c-ef2a-4d63-b55e-17660a5c683c", 00:24:04.866 "strip_size_kb": 64, 00:24:04.866 "state": "configuring", 00:24:04.866 "raid_level": "raid5f", 00:24:04.866 "superblock": true, 00:24:04.866 "num_base_bdevs": 4, 00:24:04.866 "num_base_bdevs_discovered": 2, 00:24:04.866 "num_base_bdevs_operational": 4, 00:24:04.866 "base_bdevs_list": [ 00:24:04.866 { 00:24:04.866 "name": "BaseBdev1", 00:24:04.866 "uuid": "53688c0d-5619-4d91-95af-0110bf1aca0b", 00:24:04.866 "is_configured": true, 00:24:04.866 "data_offset": 2048, 00:24:04.866 "data_size": 63488 00:24:04.866 }, 00:24:04.866 { 00:24:04.866 "name": "BaseBdev2", 00:24:04.866 "uuid": "abcb8d77-26b4-469e-997c-1280f7a0a43a", 00:24:04.866 "is_configured": true, 00:24:04.866 "data_offset": 2048, 00:24:04.866 "data_size": 63488 00:24:04.866 }, 00:24:04.866 { 00:24:04.866 "name": "BaseBdev3", 00:24:04.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.866 "is_configured": false, 00:24:04.866 "data_offset": 0, 00:24:04.866 "data_size": 0 00:24:04.866 }, 00:24:04.866 { 00:24:04.866 "name": "BaseBdev4", 00:24:04.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.866 "is_configured": false, 00:24:04.866 "data_offset": 0, 00:24:04.866 "data_size": 0 00:24:04.866 } 00:24:04.866 ] 00:24:04.866 }' 00:24:04.866 05:21:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:04.866 05:21:23 -- common/autotest_common.sh@10 -- # set +x 00:24:05.124 05:21:24 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:05.383 [2024-07-26 05:21:24.259536] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:05.383 BaseBdev3 00:24:05.383 05:21:24 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:24:05.383 05:21:24 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:24:05.383 05:21:24 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:05.383 05:21:24 -- common/autotest_common.sh@889 -- # local i 00:24:05.383 05:21:24 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:05.383 05:21:24 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:05.383 05:21:24 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:05.383 05:21:24 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:05.642 [ 00:24:05.642 { 00:24:05.642 "name": "BaseBdev3", 00:24:05.642 "aliases": [ 00:24:05.642 "ed900563-50ab-4901-b22d-59bd07fc6d0f" 00:24:05.642 ], 00:24:05.642 "product_name": "Malloc disk", 00:24:05.642 "block_size": 512, 00:24:05.642 "num_blocks": 65536, 00:24:05.642 "uuid": "ed900563-50ab-4901-b22d-59bd07fc6d0f", 00:24:05.642 "assigned_rate_limits": { 00:24:05.642 "rw_ios_per_sec": 0, 00:24:05.642 "rw_mbytes_per_sec": 0, 00:24:05.642 "r_mbytes_per_sec": 0, 00:24:05.642 "w_mbytes_per_sec": 0 00:24:05.642 }, 00:24:05.642 "claimed": true, 00:24:05.642 "claim_type": "exclusive_write", 
00:24:05.642 "zoned": false, 00:24:05.642 "supported_io_types": { 00:24:05.642 "read": true, 00:24:05.642 "write": true, 00:24:05.642 "unmap": true, 00:24:05.642 "write_zeroes": true, 00:24:05.642 "flush": true, 00:24:05.642 "reset": true, 00:24:05.642 "compare": false, 00:24:05.642 "compare_and_write": false, 00:24:05.642 "abort": true, 00:24:05.642 "nvme_admin": false, 00:24:05.642 "nvme_io": false 00:24:05.642 }, 00:24:05.642 "memory_domains": [ 00:24:05.642 { 00:24:05.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:05.642 "dma_device_type": 2 00:24:05.642 } 00:24:05.642 ], 00:24:05.642 "driver_specific": {} 00:24:05.642 } 00:24:05.642 ] 00:24:05.642 05:21:24 -- common/autotest_common.sh@895 -- # return 0 00:24:05.642 05:21:24 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:05.642 05:21:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:05.643 05:21:24 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:05.643 05:21:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:05.643 05:21:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:05.643 05:21:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:05.643 05:21:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:05.643 05:21:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:05.643 05:21:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:05.643 05:21:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:05.643 05:21:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:05.643 05:21:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:05.643 05:21:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.643 05:21:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:05.901 05:21:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:05.901 "name": "Existed_Raid", 00:24:05.901 "uuid": "2dfe539c-ef2a-4d63-b55e-17660a5c683c", 00:24:05.901 "strip_size_kb": 64, 00:24:05.901 "state": "configuring", 00:24:05.901 "raid_level": "raid5f", 00:24:05.901 "superblock": true, 00:24:05.901 "num_base_bdevs": 4, 00:24:05.901 "num_base_bdevs_discovered": 3, 00:24:05.901 "num_base_bdevs_operational": 4, 00:24:05.901 "base_bdevs_list": [ 00:24:05.901 { 00:24:05.901 "name": "BaseBdev1", 00:24:05.901 "uuid": "53688c0d-5619-4d91-95af-0110bf1aca0b", 00:24:05.901 "is_configured": true, 00:24:05.901 "data_offset": 2048, 00:24:05.901 "data_size": 63488 00:24:05.901 }, 00:24:05.901 { 00:24:05.901 "name": "BaseBdev2", 00:24:05.901 "uuid": "abcb8d77-26b4-469e-997c-1280f7a0a43a", 00:24:05.901 "is_configured": true, 00:24:05.901 "data_offset": 2048, 00:24:05.901 "data_size": 63488 00:24:05.901 }, 00:24:05.901 { 00:24:05.901 "name": "BaseBdev3", 00:24:05.901 "uuid": "ed900563-50ab-4901-b22d-59bd07fc6d0f", 00:24:05.901 "is_configured": true, 00:24:05.901 "data_offset": 2048, 00:24:05.901 "data_size": 63488 00:24:05.901 }, 00:24:05.901 { 00:24:05.901 "name": "BaseBdev4", 00:24:05.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:05.901 "is_configured": false, 00:24:05.901 "data_offset": 0, 00:24:05.901 "data_size": 0 00:24:05.901 } 00:24:05.901 ] 00:24:05.901 }' 00:24:05.901 05:21:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:05.901 05:21:24 -- common/autotest_common.sh@10 -- # set +x 00:24:06.160 05:21:25 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:06.419 [2024-07-26 05:21:25.390596] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:06.419 [2024-07-26 05:21:25.390825] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:24:06.420 [2024-07-26 05:21:25.390842] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:06.420 [2024-07-26 05:21:25.390937] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:24:06.420 BaseBdev4 00:24:06.420 [2024-07-26 05:21:25.396730] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:24:06.420 [2024-07-26 05:21:25.396926] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:24:06.420 [2024-07-26 05:21:25.397241] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:06.420 05:21:25 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:24:06.420 05:21:25 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:24:06.420 05:21:25 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:06.420 05:21:25 -- common/autotest_common.sh@889 -- # local i 00:24:06.420 05:21:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:06.420 05:21:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:06.420 05:21:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:06.679 05:21:25 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:06.937 [ 00:24:06.937 { 00:24:06.937 "name": "BaseBdev4", 00:24:06.937 "aliases": [ 00:24:06.937 "7067a8d8-2dfb-4ede-bd46-f2ed7f03ab67" 00:24:06.937 ], 00:24:06.937 "product_name": "Malloc disk", 00:24:06.937 "block_size": 512, 00:24:06.937 "num_blocks": 65536, 00:24:06.937 "uuid": "7067a8d8-2dfb-4ede-bd46-f2ed7f03ab67", 00:24:06.937 "assigned_rate_limits": { 00:24:06.937 "rw_ios_per_sec": 0, 00:24:06.937 "rw_mbytes_per_sec": 0, 00:24:06.937 "r_mbytes_per_sec": 0, 00:24:06.937 "w_mbytes_per_sec": 0 00:24:06.937 }, 00:24:06.937 "claimed": true, 00:24:06.937 "claim_type": "exclusive_write", 00:24:06.937 "zoned": false, 00:24:06.937 "supported_io_types": { 00:24:06.937 "read": true, 00:24:06.937 "write": true, 00:24:06.937 "unmap": true, 00:24:06.937 "write_zeroes": true, 00:24:06.937 "flush": true, 00:24:06.937 "reset": true, 00:24:06.938 "compare": false, 00:24:06.938 "compare_and_write": false, 00:24:06.938 "abort": true, 00:24:06.938 "nvme_admin": false, 00:24:06.938 "nvme_io": false 00:24:06.938 }, 00:24:06.938 "memory_domains": [ 00:24:06.938 { 00:24:06.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:06.938 "dma_device_type": 2 00:24:06.938 } 00:24:06.938 ], 00:24:06.938 "driver_specific": {} 00:24:06.938 } 00:24:06.938 ] 00:24:06.938 05:21:25 -- common/autotest_common.sh@895 -- # return 0 00:24:06.938 05:21:25 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:06.938 05:21:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:06.938 05:21:25 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:24:06.938 05:21:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:06.938 05:21:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:06.938 05:21:25 -- bdev/bdev_raid.sh@119 -- 
# local raid_level=raid5f 00:24:06.938 05:21:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:06.938 05:21:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:06.938 05:21:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:06.938 05:21:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:06.938 05:21:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:06.938 05:21:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:06.938 05:21:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:06.938 05:21:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.196 05:21:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:07.196 "name": "Existed_Raid", 00:24:07.196 "uuid": "2dfe539c-ef2a-4d63-b55e-17660a5c683c", 00:24:07.196 "strip_size_kb": 64, 00:24:07.196 "state": "online", 00:24:07.196 "raid_level": "raid5f", 00:24:07.196 "superblock": true, 00:24:07.196 "num_base_bdevs": 4, 00:24:07.196 "num_base_bdevs_discovered": 4, 00:24:07.196 "num_base_bdevs_operational": 4, 00:24:07.196 "base_bdevs_list": [ 00:24:07.196 { 00:24:07.196 "name": "BaseBdev1", 00:24:07.196 "uuid": "53688c0d-5619-4d91-95af-0110bf1aca0b", 00:24:07.196 "is_configured": true, 00:24:07.196 "data_offset": 2048, 00:24:07.196 "data_size": 63488 00:24:07.196 }, 00:24:07.196 { 00:24:07.196 "name": "BaseBdev2", 00:24:07.196 "uuid": "abcb8d77-26b4-469e-997c-1280f7a0a43a", 00:24:07.196 "is_configured": true, 00:24:07.196 "data_offset": 2048, 00:24:07.196 "data_size": 63488 00:24:07.196 }, 00:24:07.196 { 00:24:07.196 "name": "BaseBdev3", 00:24:07.196 "uuid": "ed900563-50ab-4901-b22d-59bd07fc6d0f", 00:24:07.196 "is_configured": true, 00:24:07.196 "data_offset": 2048, 00:24:07.196 "data_size": 63488 00:24:07.196 }, 00:24:07.196 { 00:24:07.196 "name": "BaseBdev4", 00:24:07.196 "uuid": "7067a8d8-2dfb-4ede-bd46-f2ed7f03ab67", 00:24:07.196 "is_configured": true, 00:24:07.196 "data_offset": 2048, 00:24:07.196 "data_size": 63488 00:24:07.196 } 00:24:07.196 ] 00:24:07.196 }' 00:24:07.196 05:21:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:07.196 05:21:26 -- common/autotest_common.sh@10 -- # set +x 00:24:07.454 05:21:26 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:07.454 [2024-07-26 05:21:26.547633] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:07.713 05:21:26 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:24:07.713 05:21:26 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:24:07.713 05:21:26 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:24:07.713 05:21:26 -- bdev/bdev_raid.sh@196 -- # return 0 00:24:07.713 05:21:26 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:24:07.713 05:21:26 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:24:07.713 05:21:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:07.713 05:21:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:07.713 05:21:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:07.713 05:21:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:07.713 05:21:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:07.713 05:21:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:07.713 05:21:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:07.713 05:21:26 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:07.713 05:21:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:07.713 05:21:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.713 05:21:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:07.972 05:21:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:07.972 "name": "Existed_Raid", 00:24:07.972 "uuid": "2dfe539c-ef2a-4d63-b55e-17660a5c683c", 00:24:07.972 "strip_size_kb": 64, 00:24:07.972 "state": "online", 00:24:07.972 "raid_level": "raid5f", 00:24:07.972 "superblock": true, 00:24:07.972 "num_base_bdevs": 4, 00:24:07.972 "num_base_bdevs_discovered": 3, 00:24:07.972 "num_base_bdevs_operational": 3, 00:24:07.972 "base_bdevs_list": [ 00:24:07.972 { 00:24:07.972 "name": null, 00:24:07.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.972 "is_configured": false, 00:24:07.972 "data_offset": 2048, 00:24:07.972 "data_size": 63488 00:24:07.972 }, 00:24:07.972 { 00:24:07.972 "name": "BaseBdev2", 00:24:07.972 "uuid": "abcb8d77-26b4-469e-997c-1280f7a0a43a", 00:24:07.972 "is_configured": true, 00:24:07.972 "data_offset": 2048, 00:24:07.972 "data_size": 63488 00:24:07.972 }, 00:24:07.972 { 00:24:07.972 "name": "BaseBdev3", 00:24:07.972 "uuid": "ed900563-50ab-4901-b22d-59bd07fc6d0f", 00:24:07.972 "is_configured": true, 00:24:07.972 "data_offset": 2048, 00:24:07.972 "data_size": 63488 00:24:07.972 }, 00:24:07.972 { 00:24:07.972 "name": "BaseBdev4", 00:24:07.972 "uuid": "7067a8d8-2dfb-4ede-bd46-f2ed7f03ab67", 00:24:07.972 "is_configured": true, 00:24:07.972 "data_offset": 2048, 00:24:07.972 "data_size": 63488 00:24:07.972 } 00:24:07.972 ] 00:24:07.972 }' 00:24:07.972 05:21:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:07.972 05:21:26 -- common/autotest_common.sh@10 -- # set +x 00:24:08.230 05:21:27 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:24:08.230 05:21:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:08.230 05:21:27 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:08.230 05:21:27 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.488 05:21:27 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:08.488 05:21:27 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:08.488 05:21:27 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:08.488 [2024-07-26 05:21:27.555414] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:08.488 [2024-07-26 05:21:27.555734] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:08.488 [2024-07-26 05:21:27.555989] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:08.749 05:21:27 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:08.749 05:21:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:08.749 05:21:27 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.749 05:21:27 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:09.008 05:21:27 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:09.008 05:21:27 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:09.008 05:21:27 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:09.008 [2024-07-26 05:21:28.099261] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:09.266 05:21:28 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:09.266 05:21:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:09.266 05:21:28 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:09.266 05:21:28 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.266 05:21:28 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:09.266 05:21:28 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:09.266 05:21:28 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:09.525 [2024-07-26 05:21:28.598344] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:09.525 [2024-07-26 05:21:28.598705] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:24:09.782 05:21:28 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:09.782 05:21:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:09.782 05:21:28 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.782 05:21:28 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:24:10.041 05:21:28 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:24:10.041 05:21:28 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:24:10.041 05:21:28 -- bdev/bdev_raid.sh@287 -- # killprocess 84897 00:24:10.041 05:21:28 -- common/autotest_common.sh@926 -- # '[' -z 84897 ']' 00:24:10.041 05:21:28 -- common/autotest_common.sh@930 -- # kill -0 84897 00:24:10.041 05:21:28 -- common/autotest_common.sh@931 -- # uname 00:24:10.041 05:21:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:10.041 05:21:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84897 00:24:10.041 killing process with pid 84897 00:24:10.041 05:21:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:10.041 05:21:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:10.041 05:21:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84897' 00:24:10.041 05:21:28 -- common/autotest_common.sh@945 -- # kill 84897 00:24:10.041 [2024-07-26 05:21:28.949503] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:10.041 05:21:28 -- common/autotest_common.sh@950 -- # wait 84897 00:24:10.041 [2024-07-26 05:21:28.949616] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:10.976 ************************************ 00:24:10.976 END TEST raid5f_state_function_test_sb 00:24:10.976 ************************************ 00:24:10.976 05:21:29 -- bdev/bdev_raid.sh@289 -- # return 0 00:24:10.976 00:24:10.976 real 0m12.222s 00:24:10.976 user 0m20.601s 00:24:10.976 sys 0m1.779s 00:24:10.976 05:21:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:10.976 05:21:29 -- common/autotest_common.sh@10 -- # set +x 00:24:10.976 05:21:29 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:24:10.976 05:21:29 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:24:10.976 05:21:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:10.976 05:21:29 -- common/autotest_common.sh@10 -- # set +x 00:24:10.976 ************************************ 
00:24:10.976 START TEST raid5f_superblock_test 00:24:10.976 ************************************ 00:24:10.976 05:21:29 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid5f 4 00:24:10.976 05:21:29 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:24:10.976 05:21:29 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:24:10.976 05:21:29 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:24:10.976 05:21:29 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:24:10.976 05:21:29 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:24:10.976 05:21:29 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:24:10.976 05:21:29 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:24:10.976 05:21:29 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:24:10.976 05:21:29 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:24:10.976 05:21:29 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:24:10.976 05:21:29 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:24:10.976 05:21:29 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:24:10.976 05:21:29 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:24:10.976 05:21:29 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:24:10.976 05:21:29 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:24:10.976 05:21:29 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:24:10.976 05:21:29 -- bdev/bdev_raid.sh@357 -- # raid_pid=85294 00:24:10.976 05:21:29 -- bdev/bdev_raid.sh@358 -- # waitforlisten 85294 /var/tmp/spdk-raid.sock 00:24:10.976 05:21:29 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:24:10.976 05:21:29 -- common/autotest_common.sh@819 -- # '[' -z 85294 ']' 00:24:10.976 05:21:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:10.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:10.976 05:21:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:10.976 05:21:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:10.976 05:21:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:10.976 05:21:29 -- common/autotest_common.sh@10 -- # set +x 00:24:10.976 [2024-07-26 05:21:30.034225] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
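The superblock test starting here builds its array on passthru bdevs layered over malloc disks, then checks that the superblock written through that layer is detected when a new array is requested on the raw malloc bdevs. A condensed sketch of the sequence the trace below walks through, assuming the same rpc.py path and socket as above (the pt UUIDs are the fixed placeholders used by the test):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-raid.sock
    for i in 1 2 3 4; do
        $RPC -s $SOCK bdev_malloc_create 32 512 -b malloc$i        # 32 MiB disk, 512-byte blocks
        $RPC -s $SOCK bdev_passthru_create -b malloc$i -p pt$i \
            -u 00000000-0000-0000-0000-00000000000$i
    done
    # assemble raid_bdev1 with superblocks (-s) written through the passthru layer
    $RPC -s $SOCK bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
    # tear everything down, then ask for a new array on the malloc bdevs directly; the trace
    # below shows this being rejected ("Existing raid superblock found on bdev malloc1" ...)
    $RPC -s $SOCK bdev_raid_delete raid_bdev1
    for i in 1 2 3 4; do $RPC -s $SOCK bdev_passthru_delete pt$i; done
    $RPC -s $SOCK bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
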
00:24:10.976 [2024-07-26 05:21:30.034395] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85294 ] 00:24:11.235 [2024-07-26 05:21:30.203594] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.494 [2024-07-26 05:21:30.350412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.494 [2024-07-26 05:21:30.492355] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:11.752 05:21:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:11.752 05:21:30 -- common/autotest_common.sh@852 -- # return 0 00:24:11.752 05:21:30 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:24:11.752 05:21:30 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:11.752 05:21:30 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:24:11.752 05:21:30 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:24:11.752 05:21:30 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:11.752 05:21:30 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:11.752 05:21:30 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:11.752 05:21:30 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:11.752 05:21:30 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:24:12.010 malloc1 00:24:12.269 05:21:31 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:12.269 [2024-07-26 05:21:31.334652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:12.269 [2024-07-26 05:21:31.334726] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:12.269 [2024-07-26 05:21:31.334789] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:24:12.269 [2024-07-26 05:21:31.334822] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.269 [2024-07-26 05:21:31.337185] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:12.269 [2024-07-26 05:21:31.337388] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:12.269 pt1 00:24:12.269 05:21:31 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:12.269 05:21:31 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:12.269 05:21:31 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:24:12.269 05:21:31 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:24:12.269 05:21:31 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:12.269 05:21:31 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:12.269 05:21:31 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:12.269 05:21:31 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:12.269 05:21:31 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:24:12.527 malloc2 00:24:12.527 05:21:31 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:24:12.786 [2024-07-26 05:21:31.794338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:12.786 [2024-07-26 05:21:31.794567] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:12.786 [2024-07-26 05:21:31.794608] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:24:12.786 [2024-07-26 05:21:31.794623] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.786 [2024-07-26 05:21:31.796849] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:12.786 [2024-07-26 05:21:31.796889] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:12.786 pt2 00:24:12.786 05:21:31 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:12.786 05:21:31 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:12.786 05:21:31 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:24:12.786 05:21:31 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:24:12.786 05:21:31 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:24:12.786 05:21:31 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:12.786 05:21:31 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:12.786 05:21:31 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:12.786 05:21:31 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:24:13.044 malloc3 00:24:13.044 05:21:32 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:13.304 [2024-07-26 05:21:32.186896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:13.304 [2024-07-26 05:21:32.186997] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:13.304 [2024-07-26 05:21:32.187059] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:24:13.304 [2024-07-26 05:21:32.187077] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:13.304 [2024-07-26 05:21:32.189405] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:13.304 [2024-07-26 05:21:32.189445] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:13.304 pt3 00:24:13.304 05:21:32 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:13.304 05:21:32 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:13.304 05:21:32 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:24:13.304 05:21:32 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:24:13.304 05:21:32 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:24:13.304 05:21:32 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:13.304 05:21:32 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:13.304 05:21:32 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:13.304 05:21:32 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:24:13.563 malloc4 00:24:13.563 05:21:32 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:24:13.563 [2024-07-26 05:21:32.631424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:13.563 [2024-07-26 05:21:32.631631] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:13.563 [2024-07-26 05:21:32.631677] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:24:13.563 [2024-07-26 05:21:32.631693] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:13.563 [2024-07-26 05:21:32.633881] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:13.563 [2024-07-26 05:21:32.633919] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:13.563 pt4 00:24:13.563 05:21:32 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:13.563 05:21:32 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:13.563 05:21:32 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:24:13.822 [2024-07-26 05:21:32.863525] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:13.822 [2024-07-26 05:21:32.865348] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:13.822 [2024-07-26 05:21:32.865444] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:13.822 [2024-07-26 05:21:32.865506] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:13.822 [2024-07-26 05:21:32.865709] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:24:13.822 [2024-07-26 05:21:32.865725] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:13.822 [2024-07-26 05:21:32.865819] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:24:13.822 [2024-07-26 05:21:32.871437] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:24:13.822 [2024-07-26 05:21:32.871466] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009380 00:24:13.822 [2024-07-26 05:21:32.871635] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:13.822 05:21:32 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:13.822 05:21:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:13.822 05:21:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:13.822 05:21:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:13.822 05:21:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:13.822 05:21:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:13.822 05:21:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:13.822 05:21:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:13.822 05:21:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:13.822 05:21:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:13.822 05:21:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.822 05:21:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.081 05:21:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:14.081 "name": "raid_bdev1", 00:24:14.081 "uuid": 
"585302a4-f968-462e-9c96-744a54cbf1d9", 00:24:14.081 "strip_size_kb": 64, 00:24:14.081 "state": "online", 00:24:14.081 "raid_level": "raid5f", 00:24:14.081 "superblock": true, 00:24:14.081 "num_base_bdevs": 4, 00:24:14.081 "num_base_bdevs_discovered": 4, 00:24:14.081 "num_base_bdevs_operational": 4, 00:24:14.081 "base_bdevs_list": [ 00:24:14.081 { 00:24:14.081 "name": "pt1", 00:24:14.081 "uuid": "82449e97-48ec-56a4-ba73-24b3bce689ae", 00:24:14.081 "is_configured": true, 00:24:14.081 "data_offset": 2048, 00:24:14.081 "data_size": 63488 00:24:14.081 }, 00:24:14.081 { 00:24:14.081 "name": "pt2", 00:24:14.081 "uuid": "5b77f0c3-190e-5c34-93b5-40c282ff96b7", 00:24:14.081 "is_configured": true, 00:24:14.081 "data_offset": 2048, 00:24:14.081 "data_size": 63488 00:24:14.081 }, 00:24:14.081 { 00:24:14.081 "name": "pt3", 00:24:14.081 "uuid": "7c63dfa7-1f74-572d-bcf6-c419571f2e66", 00:24:14.081 "is_configured": true, 00:24:14.081 "data_offset": 2048, 00:24:14.081 "data_size": 63488 00:24:14.081 }, 00:24:14.081 { 00:24:14.081 "name": "pt4", 00:24:14.081 "uuid": "68014fd6-7ce4-51cd-98af-72e8ecaa5c2d", 00:24:14.081 "is_configured": true, 00:24:14.081 "data_offset": 2048, 00:24:14.081 "data_size": 63488 00:24:14.081 } 00:24:14.081 ] 00:24:14.081 }' 00:24:14.081 05:21:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:14.081 05:21:33 -- common/autotest_common.sh@10 -- # set +x 00:24:14.340 05:21:33 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:24:14.341 05:21:33 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:14.602 [2024-07-26 05:21:33.497439] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:14.602 05:21:33 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=585302a4-f968-462e-9c96-744a54cbf1d9 00:24:14.602 05:21:33 -- bdev/bdev_raid.sh@380 -- # '[' -z 585302a4-f968-462e-9c96-744a54cbf1d9 ']' 00:24:14.602 05:21:33 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:14.602 [2024-07-26 05:21:33.689313] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:14.602 [2024-07-26 05:21:33.689466] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:14.602 [2024-07-26 05:21:33.689554] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:14.602 [2024-07-26 05:21:33.689649] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:14.602 [2024-07-26 05:21:33.689663] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, state offline 00:24:14.602 05:21:33 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.602 05:21:33 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:24:14.860 05:21:33 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:24:14.860 05:21:33 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:24:14.860 05:21:33 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:14.860 05:21:33 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:15.118 05:21:34 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:15.118 05:21:34 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:24:15.377 05:21:34 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:15.377 05:21:34 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:15.636 05:21:34 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:15.636 05:21:34 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:24:15.636 05:21:34 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:24:15.636 05:21:34 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:15.894 05:21:34 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:24:15.894 05:21:34 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:15.894 05:21:34 -- common/autotest_common.sh@640 -- # local es=0 00:24:15.894 05:21:34 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:15.894 05:21:34 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:15.894 05:21:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:15.894 05:21:34 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:15.894 05:21:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:15.894 05:21:34 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:15.894 05:21:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:15.894 05:21:34 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:15.894 05:21:34 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:15.894 05:21:34 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:16.153 [2024-07-26 05:21:35.161676] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:16.153 [2024-07-26 05:21:35.163532] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:16.153 [2024-07-26 05:21:35.163591] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:24:16.153 [2024-07-26 05:21:35.163631] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:24:16.153 [2024-07-26 05:21:35.163688] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:24:16.153 [2024-07-26 05:21:35.163744] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:24:16.153 [2024-07-26 05:21:35.163773] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:24:16.153 [2024-07-26 05:21:35.163798] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:24:16.153 [2024-07-26 05:21:35.163818] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:16.153 [2024-07-26 05:21:35.163829] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state configuring 00:24:16.153 request: 00:24:16.153 { 00:24:16.153 "name": "raid_bdev1", 00:24:16.153 "raid_level": "raid5f", 00:24:16.153 "base_bdevs": [ 00:24:16.153 "malloc1", 00:24:16.153 "malloc2", 00:24:16.153 "malloc3", 00:24:16.153 "malloc4" 00:24:16.153 ], 00:24:16.153 "superblock": false, 00:24:16.153 "strip_size_kb": 64, 00:24:16.153 "method": "bdev_raid_create", 00:24:16.153 "req_id": 1 00:24:16.153 } 00:24:16.153 Got JSON-RPC error response 00:24:16.153 response: 00:24:16.153 { 00:24:16.153 "code": -17, 00:24:16.153 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:16.153 } 00:24:16.153 05:21:35 -- common/autotest_common.sh@643 -- # es=1 00:24:16.153 05:21:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:24:16.153 05:21:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:24:16.153 05:21:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:24:16.153 05:21:35 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.153 05:21:35 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:24:16.431 05:21:35 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:24:16.431 05:21:35 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:24:16.431 05:21:35 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:16.702 [2024-07-26 05:21:35.577711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:16.702 [2024-07-26 05:21:35.577944] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:16.702 [2024-07-26 05:21:35.577984] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:24:16.702 [2024-07-26 05:21:35.577998] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:16.702 [2024-07-26 05:21:35.580283] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:16.702 [2024-07-26 05:21:35.580331] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:16.702 [2024-07-26 05:21:35.580422] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:24:16.702 [2024-07-26 05:21:35.580476] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:16.702 pt1 00:24:16.702 05:21:35 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:24:16.702 05:21:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:16.702 05:21:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:16.702 05:21:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:16.702 05:21:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:16.702 05:21:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:16.702 05:21:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:16.702 05:21:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:16.702 05:21:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:16.702 05:21:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:16.702 05:21:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.702 05:21:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.702 05:21:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:16.702 "name": "raid_bdev1", 00:24:16.702 "uuid": "585302a4-f968-462e-9c96-744a54cbf1d9", 00:24:16.702 "strip_size_kb": 64, 00:24:16.702 "state": "configuring", 00:24:16.702 "raid_level": "raid5f", 00:24:16.702 "superblock": true, 00:24:16.702 "num_base_bdevs": 4, 00:24:16.702 "num_base_bdevs_discovered": 1, 00:24:16.702 "num_base_bdevs_operational": 4, 00:24:16.702 "base_bdevs_list": [ 00:24:16.702 { 00:24:16.702 "name": "pt1", 00:24:16.702 "uuid": "82449e97-48ec-56a4-ba73-24b3bce689ae", 00:24:16.702 "is_configured": true, 00:24:16.702 "data_offset": 2048, 00:24:16.702 "data_size": 63488 00:24:16.702 }, 00:24:16.702 { 00:24:16.702 "name": null, 00:24:16.702 "uuid": "5b77f0c3-190e-5c34-93b5-40c282ff96b7", 00:24:16.702 "is_configured": false, 00:24:16.702 "data_offset": 2048, 00:24:16.702 "data_size": 63488 00:24:16.703 }, 00:24:16.703 { 00:24:16.703 "name": null, 00:24:16.703 "uuid": "7c63dfa7-1f74-572d-bcf6-c419571f2e66", 00:24:16.703 "is_configured": false, 00:24:16.703 "data_offset": 2048, 00:24:16.703 "data_size": 63488 00:24:16.703 }, 00:24:16.703 { 00:24:16.703 "name": null, 00:24:16.703 "uuid": "68014fd6-7ce4-51cd-98af-72e8ecaa5c2d", 00:24:16.703 "is_configured": false, 00:24:16.703 "data_offset": 2048, 00:24:16.703 "data_size": 63488 00:24:16.703 } 00:24:16.703 ] 00:24:16.703 }' 00:24:16.703 05:21:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:16.703 05:21:35 -- common/autotest_common.sh@10 -- # set +x 00:24:16.961 05:21:36 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:24:16.961 05:21:36 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:17.221 [2024-07-26 05:21:36.277864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:17.221 [2024-07-26 05:21:36.278091] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:17.221 [2024-07-26 05:21:36.278136] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:24:17.221 [2024-07-26 05:21:36.278151] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:17.221 [2024-07-26 05:21:36.278621] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:17.221 [2024-07-26 05:21:36.278642] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:17.221 [2024-07-26 05:21:36.278728] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:17.221 [2024-07-26 05:21:36.278751] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:17.221 pt2 00:24:17.221 05:21:36 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:17.479 [2024-07-26 05:21:36.469931] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:24:17.479 05:21:36 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:24:17.479 05:21:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:17.479 05:21:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:17.479 05:21:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:17.479 05:21:36 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:17.479 05:21:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:17.479 05:21:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:17.479 05:21:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:17.479 05:21:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:17.479 05:21:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:17.479 05:21:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.480 05:21:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:17.739 05:21:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:17.739 "name": "raid_bdev1", 00:24:17.739 "uuid": "585302a4-f968-462e-9c96-744a54cbf1d9", 00:24:17.739 "strip_size_kb": 64, 00:24:17.739 "state": "configuring", 00:24:17.739 "raid_level": "raid5f", 00:24:17.739 "superblock": true, 00:24:17.739 "num_base_bdevs": 4, 00:24:17.739 "num_base_bdevs_discovered": 1, 00:24:17.739 "num_base_bdevs_operational": 4, 00:24:17.739 "base_bdevs_list": [ 00:24:17.739 { 00:24:17.739 "name": "pt1", 00:24:17.739 "uuid": "82449e97-48ec-56a4-ba73-24b3bce689ae", 00:24:17.739 "is_configured": true, 00:24:17.739 "data_offset": 2048, 00:24:17.739 "data_size": 63488 00:24:17.739 }, 00:24:17.739 { 00:24:17.739 "name": null, 00:24:17.739 "uuid": "5b77f0c3-190e-5c34-93b5-40c282ff96b7", 00:24:17.739 "is_configured": false, 00:24:17.739 "data_offset": 2048, 00:24:17.739 "data_size": 63488 00:24:17.739 }, 00:24:17.739 { 00:24:17.739 "name": null, 00:24:17.739 "uuid": "7c63dfa7-1f74-572d-bcf6-c419571f2e66", 00:24:17.739 "is_configured": false, 00:24:17.739 "data_offset": 2048, 00:24:17.739 "data_size": 63488 00:24:17.739 }, 00:24:17.739 { 00:24:17.739 "name": null, 00:24:17.739 "uuid": "68014fd6-7ce4-51cd-98af-72e8ecaa5c2d", 00:24:17.739 "is_configured": false, 00:24:17.739 "data_offset": 2048, 00:24:17.739 "data_size": 63488 00:24:17.739 } 00:24:17.739 ] 00:24:17.739 }' 00:24:17.739 05:21:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:17.739 05:21:36 -- common/autotest_common.sh@10 -- # set +x 00:24:17.998 05:21:37 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:24:17.998 05:21:37 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:24:17.998 05:21:37 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:18.257 [2024-07-26 05:21:37.182091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:18.257 [2024-07-26 05:21:37.182160] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:18.257 [2024-07-26 05:21:37.182185] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:24:18.257 [2024-07-26 05:21:37.182198] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:18.257 [2024-07-26 05:21:37.182604] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:18.257 [2024-07-26 05:21:37.182629] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:18.257 [2024-07-26 05:21:37.182709] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:18.257 [2024-07-26 05:21:37.182738] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:18.257 pt2 00:24:18.257 05:21:37 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:24:18.257 05:21:37 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:24:18.257 05:21:37 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:18.517 [2024-07-26 05:21:37.438181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:18.517 [2024-07-26 05:21:37.438239] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:18.517 [2024-07-26 05:21:37.438261] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:24:18.517 [2024-07-26 05:21:37.438274] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:18.517 [2024-07-26 05:21:37.438622] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:18.517 [2024-07-26 05:21:37.438647] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:18.517 [2024-07-26 05:21:37.438718] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:24:18.517 [2024-07-26 05:21:37.438750] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:18.517 pt3 00:24:18.517 05:21:37 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:24:18.517 05:21:37 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:24:18.517 05:21:37 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:18.517 [2024-07-26 05:21:37.618207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:18.517 [2024-07-26 05:21:37.618424] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:18.517 [2024-07-26 05:21:37.618465] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:24:18.517 [2024-07-26 05:21:37.618484] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:18.517 [2024-07-26 05:21:37.618951] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:18.517 [2024-07-26 05:21:37.618978] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:18.517 [2024-07-26 05:21:37.619090] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:24:18.517 [2024-07-26 05:21:37.619124] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:18.517 [2024-07-26 05:21:37.619272] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:24:18.517 [2024-07-26 05:21:37.619305] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:18.517 [2024-07-26 05:21:37.619420] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:24:18.776 pt4 00:24:18.776 [2024-07-26 05:21:37.625109] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:24:18.776 [2024-07-26 05:21:37.625132] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:24:18.776 [2024-07-26 05:21:37.625339] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:18.776 05:21:37 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:24:18.776 05:21:37 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:24:18.776 05:21:37 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:18.776 05:21:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:18.776 05:21:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:18.776 05:21:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:18.776 05:21:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:18.776 05:21:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:18.776 05:21:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:18.776 05:21:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:18.776 05:21:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:18.776 05:21:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:18.776 05:21:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.776 05:21:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:18.776 05:21:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:18.776 "name": "raid_bdev1", 00:24:18.776 "uuid": "585302a4-f968-462e-9c96-744a54cbf1d9", 00:24:18.776 "strip_size_kb": 64, 00:24:18.776 "state": "online", 00:24:18.776 "raid_level": "raid5f", 00:24:18.776 "superblock": true, 00:24:18.776 "num_base_bdevs": 4, 00:24:18.776 "num_base_bdevs_discovered": 4, 00:24:18.776 "num_base_bdevs_operational": 4, 00:24:18.776 "base_bdevs_list": [ 00:24:18.776 { 00:24:18.776 "name": "pt1", 00:24:18.776 "uuid": "82449e97-48ec-56a4-ba73-24b3bce689ae", 00:24:18.776 "is_configured": true, 00:24:18.776 "data_offset": 2048, 00:24:18.776 "data_size": 63488 00:24:18.776 }, 00:24:18.776 { 00:24:18.776 "name": "pt2", 00:24:18.776 "uuid": "5b77f0c3-190e-5c34-93b5-40c282ff96b7", 00:24:18.776 "is_configured": true, 00:24:18.776 "data_offset": 2048, 00:24:18.776 "data_size": 63488 00:24:18.776 }, 00:24:18.776 { 00:24:18.776 "name": "pt3", 00:24:18.776 "uuid": "7c63dfa7-1f74-572d-bcf6-c419571f2e66", 00:24:18.776 "is_configured": true, 00:24:18.776 "data_offset": 2048, 00:24:18.776 "data_size": 63488 00:24:18.776 }, 00:24:18.776 { 00:24:18.776 "name": "pt4", 00:24:18.776 "uuid": "68014fd6-7ce4-51cd-98af-72e8ecaa5c2d", 00:24:18.776 "is_configured": true, 00:24:18.776 "data_offset": 2048, 00:24:18.776 "data_size": 63488 00:24:18.776 } 00:24:18.776 ] 00:24:18.776 }' 00:24:18.776 05:21:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:18.776 05:21:37 -- common/autotest_common.sh@10 -- # set +x 00:24:19.035 05:21:38 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:19.035 05:21:38 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:24:19.294 [2024-07-26 05:21:38.363912] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:19.294 05:21:38 -- bdev/bdev_raid.sh@430 -- # '[' 585302a4-f968-462e-9c96-744a54cbf1d9 '!=' 585302a4-f968-462e-9c96-744a54cbf1d9 ']' 00:24:19.294 05:21:38 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:24:19.294 05:21:38 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:24:19.294 05:21:38 -- bdev/bdev_raid.sh@196 -- # return 0 00:24:19.294 05:21:38 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:19.553 [2024-07-26 05:21:38.547830] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:24:19.553 05:21:38 -- bdev/bdev_raid.sh@439 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:19.553 05:21:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:19.553 05:21:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:19.553 05:21:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:19.553 05:21:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:19.553 05:21:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:19.553 05:21:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:19.553 05:21:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:19.553 05:21:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:19.553 05:21:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:19.553 05:21:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:19.553 05:21:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:19.812 05:21:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:19.812 "name": "raid_bdev1", 00:24:19.812 "uuid": "585302a4-f968-462e-9c96-744a54cbf1d9", 00:24:19.812 "strip_size_kb": 64, 00:24:19.812 "state": "online", 00:24:19.812 "raid_level": "raid5f", 00:24:19.812 "superblock": true, 00:24:19.812 "num_base_bdevs": 4, 00:24:19.812 "num_base_bdevs_discovered": 3, 00:24:19.812 "num_base_bdevs_operational": 3, 00:24:19.812 "base_bdevs_list": [ 00:24:19.812 { 00:24:19.812 "name": null, 00:24:19.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.812 "is_configured": false, 00:24:19.812 "data_offset": 2048, 00:24:19.812 "data_size": 63488 00:24:19.812 }, 00:24:19.812 { 00:24:19.812 "name": "pt2", 00:24:19.812 "uuid": "5b77f0c3-190e-5c34-93b5-40c282ff96b7", 00:24:19.812 "is_configured": true, 00:24:19.812 "data_offset": 2048, 00:24:19.812 "data_size": 63488 00:24:19.812 }, 00:24:19.812 { 00:24:19.812 "name": "pt3", 00:24:19.812 "uuid": "7c63dfa7-1f74-572d-bcf6-c419571f2e66", 00:24:19.812 "is_configured": true, 00:24:19.812 "data_offset": 2048, 00:24:19.812 "data_size": 63488 00:24:19.812 }, 00:24:19.812 { 00:24:19.812 "name": "pt4", 00:24:19.812 "uuid": "68014fd6-7ce4-51cd-98af-72e8ecaa5c2d", 00:24:19.812 "is_configured": true, 00:24:19.812 "data_offset": 2048, 00:24:19.812 "data_size": 63488 00:24:19.812 } 00:24:19.812 ] 00:24:19.812 }' 00:24:19.812 05:21:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:19.812 05:21:38 -- common/autotest_common.sh@10 -- # set +x 00:24:20.071 05:21:39 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:20.071 [2024-07-26 05:21:39.179981] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:20.343 [2024-07-26 05:21:39.180251] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:20.343 [2024-07-26 05:21:39.180354] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:20.343 [2024-07-26 05:21:39.180460] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:20.343 [2024-07-26 05:21:39.180476] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:24:20.343 05:21:39 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.343 05:21:39 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:24:20.343 
05:21:39 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:24:20.343 05:21:39 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:24:20.343 05:21:39 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:24:20.343 05:21:39 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:24:20.343 05:21:39 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:20.605 05:21:39 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:24:20.605 05:21:39 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:24:20.605 05:21:39 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:20.863 05:21:39 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:24:20.863 05:21:39 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:24:20.863 05:21:39 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:24:21.123 05:21:39 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:24:21.123 05:21:39 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:24:21.123 05:21:39 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:24:21.123 05:21:39 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:24:21.123 05:21:39 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:21.123 [2024-07-26 05:21:40.148131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:21.123 [2024-07-26 05:21:40.148216] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:21.123 [2024-07-26 05:21:40.148252] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b480 00:24:21.123 [2024-07-26 05:21:40.148264] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:21.123 [2024-07-26 05:21:40.150619] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:21.123 [2024-07-26 05:21:40.150799] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:21.123 [2024-07-26 05:21:40.151063] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:21.123 [2024-07-26 05:21:40.151234] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:21.123 pt2 00:24:21.123 05:21:40 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:24:21.123 05:21:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:21.123 05:21:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:21.123 05:21:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:21.123 05:21:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:21.123 05:21:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:21.123 05:21:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:21.123 05:21:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:21.123 05:21:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:21.123 05:21:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:21.123 05:21:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.123 05:21:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:21.381 05:21:40 -- bdev/bdev_raid.sh@127 
-- # raid_bdev_info='{ 00:24:21.381 "name": "raid_bdev1", 00:24:21.381 "uuid": "585302a4-f968-462e-9c96-744a54cbf1d9", 00:24:21.381 "strip_size_kb": 64, 00:24:21.381 "state": "configuring", 00:24:21.381 "raid_level": "raid5f", 00:24:21.381 "superblock": true, 00:24:21.381 "num_base_bdevs": 4, 00:24:21.381 "num_base_bdevs_discovered": 1, 00:24:21.381 "num_base_bdevs_operational": 3, 00:24:21.381 "base_bdevs_list": [ 00:24:21.381 { 00:24:21.381 "name": null, 00:24:21.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.381 "is_configured": false, 00:24:21.381 "data_offset": 2048, 00:24:21.381 "data_size": 63488 00:24:21.381 }, 00:24:21.381 { 00:24:21.381 "name": "pt2", 00:24:21.381 "uuid": "5b77f0c3-190e-5c34-93b5-40c282ff96b7", 00:24:21.381 "is_configured": true, 00:24:21.381 "data_offset": 2048, 00:24:21.381 "data_size": 63488 00:24:21.381 }, 00:24:21.381 { 00:24:21.381 "name": null, 00:24:21.381 "uuid": "7c63dfa7-1f74-572d-bcf6-c419571f2e66", 00:24:21.382 "is_configured": false, 00:24:21.382 "data_offset": 2048, 00:24:21.382 "data_size": 63488 00:24:21.382 }, 00:24:21.382 { 00:24:21.382 "name": null, 00:24:21.382 "uuid": "68014fd6-7ce4-51cd-98af-72e8ecaa5c2d", 00:24:21.382 "is_configured": false, 00:24:21.382 "data_offset": 2048, 00:24:21.382 "data_size": 63488 00:24:21.382 } 00:24:21.382 ] 00:24:21.382 }' 00:24:21.382 05:21:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:21.382 05:21:40 -- common/autotest_common.sh@10 -- # set +x 00:24:21.640 05:21:40 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:24:21.640 05:21:40 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:24:21.640 05:21:40 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:21.898 [2024-07-26 05:21:40.952386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:21.898 [2024-07-26 05:21:40.952482] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:21.898 [2024-07-26 05:21:40.952513] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000bd80 00:24:21.898 [2024-07-26 05:21:40.952525] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:21.898 [2024-07-26 05:21:40.952947] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:21.898 [2024-07-26 05:21:40.952969] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:21.898 [2024-07-26 05:21:40.953097] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:24:21.898 [2024-07-26 05:21:40.953148] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:21.898 pt3 00:24:21.898 05:21:40 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:24:21.898 05:21:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:21.898 05:21:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:21.898 05:21:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:21.898 05:21:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:21.898 05:21:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:21.898 05:21:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:21.898 05:21:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:21.898 05:21:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
00:24:21.898 05:21:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:21.898 05:21:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.898 05:21:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:22.157 05:21:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:22.157 "name": "raid_bdev1", 00:24:22.157 "uuid": "585302a4-f968-462e-9c96-744a54cbf1d9", 00:24:22.157 "strip_size_kb": 64, 00:24:22.157 "state": "configuring", 00:24:22.157 "raid_level": "raid5f", 00:24:22.157 "superblock": true, 00:24:22.157 "num_base_bdevs": 4, 00:24:22.157 "num_base_bdevs_discovered": 2, 00:24:22.157 "num_base_bdevs_operational": 3, 00:24:22.157 "base_bdevs_list": [ 00:24:22.157 { 00:24:22.157 "name": null, 00:24:22.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.157 "is_configured": false, 00:24:22.157 "data_offset": 2048, 00:24:22.157 "data_size": 63488 00:24:22.157 }, 00:24:22.157 { 00:24:22.157 "name": "pt2", 00:24:22.157 "uuid": "5b77f0c3-190e-5c34-93b5-40c282ff96b7", 00:24:22.157 "is_configured": true, 00:24:22.157 "data_offset": 2048, 00:24:22.157 "data_size": 63488 00:24:22.157 }, 00:24:22.157 { 00:24:22.157 "name": "pt3", 00:24:22.157 "uuid": "7c63dfa7-1f74-572d-bcf6-c419571f2e66", 00:24:22.157 "is_configured": true, 00:24:22.157 "data_offset": 2048, 00:24:22.157 "data_size": 63488 00:24:22.157 }, 00:24:22.157 { 00:24:22.157 "name": null, 00:24:22.157 "uuid": "68014fd6-7ce4-51cd-98af-72e8ecaa5c2d", 00:24:22.157 "is_configured": false, 00:24:22.157 "data_offset": 2048, 00:24:22.157 "data_size": 63488 00:24:22.157 } 00:24:22.157 ] 00:24:22.157 }' 00:24:22.157 05:21:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:22.157 05:21:41 -- common/autotest_common.sh@10 -- # set +x 00:24:22.415 05:21:41 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:24:22.416 05:21:41 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:24:22.416 05:21:41 -- bdev/bdev_raid.sh@462 -- # i=3 00:24:22.416 05:21:41 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:22.674 [2024-07-26 05:21:41.568496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:22.674 [2024-07-26 05:21:41.568567] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:22.674 [2024-07-26 05:21:41.568763] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c080 00:24:22.674 [2024-07-26 05:21:41.568786] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:22.674 [2024-07-26 05:21:41.569269] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:22.674 [2024-07-26 05:21:41.569292] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:22.674 [2024-07-26 05:21:41.569443] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:24:22.674 [2024-07-26 05:21:41.569494] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:22.674 [2024-07-26 05:21:41.569634] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ba80 00:24:22.674 [2024-07-26 05:21:41.569648] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:22.674 [2024-07-26 05:21:41.569740] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x50d000005930 00:24:22.674 pt4 00:24:22.674 [2024-07-26 05:21:41.575062] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ba80 00:24:22.674 [2024-07-26 05:21:41.575091] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ba80 00:24:22.674 [2024-07-26 05:21:41.575391] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:22.674 05:21:41 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:22.674 05:21:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:22.674 05:21:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:22.674 05:21:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:22.674 05:21:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:22.674 05:21:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:22.674 05:21:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:22.674 05:21:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:22.674 05:21:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:22.675 05:21:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:22.675 05:21:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.675 05:21:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:22.933 05:21:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:22.933 "name": "raid_bdev1", 00:24:22.933 "uuid": "585302a4-f968-462e-9c96-744a54cbf1d9", 00:24:22.933 "strip_size_kb": 64, 00:24:22.933 "state": "online", 00:24:22.933 "raid_level": "raid5f", 00:24:22.933 "superblock": true, 00:24:22.933 "num_base_bdevs": 4, 00:24:22.933 "num_base_bdevs_discovered": 3, 00:24:22.933 "num_base_bdevs_operational": 3, 00:24:22.933 "base_bdevs_list": [ 00:24:22.933 { 00:24:22.933 "name": null, 00:24:22.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.933 "is_configured": false, 00:24:22.933 "data_offset": 2048, 00:24:22.933 "data_size": 63488 00:24:22.933 }, 00:24:22.933 { 00:24:22.933 "name": "pt2", 00:24:22.933 "uuid": "5b77f0c3-190e-5c34-93b5-40c282ff96b7", 00:24:22.933 "is_configured": true, 00:24:22.933 "data_offset": 2048, 00:24:22.933 "data_size": 63488 00:24:22.933 }, 00:24:22.933 { 00:24:22.933 "name": "pt3", 00:24:22.933 "uuid": "7c63dfa7-1f74-572d-bcf6-c419571f2e66", 00:24:22.933 "is_configured": true, 00:24:22.933 "data_offset": 2048, 00:24:22.933 "data_size": 63488 00:24:22.933 }, 00:24:22.933 { 00:24:22.933 "name": "pt4", 00:24:22.933 "uuid": "68014fd6-7ce4-51cd-98af-72e8ecaa5c2d", 00:24:22.933 "is_configured": true, 00:24:22.933 "data_offset": 2048, 00:24:22.933 "data_size": 63488 00:24:22.933 } 00:24:22.933 ] 00:24:22.933 }' 00:24:22.933 05:21:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:22.933 05:21:41 -- common/autotest_common.sh@10 -- # set +x 00:24:23.191 05:21:42 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:24:23.192 05:21:42 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:23.450 [2024-07-26 05:21:42.337668] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:23.450 [2024-07-26 05:21:42.337698] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:23.450 [2024-07-26 05:21:42.337773] bdev_raid.c: 
449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:23.450 [2024-07-26 05:21:42.337844] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:23.450 [2024-07-26 05:21:42.337863] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ba80 name raid_bdev1, state offline 00:24:23.450 05:21:42 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.450 05:21:42 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:24:23.708 05:21:42 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:24:23.708 05:21:42 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:24:23.708 05:21:42 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:23.708 [2024-07-26 05:21:42.817749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:23.708 [2024-07-26 05:21:42.817832] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:23.708 [2024-07-26 05:21:42.817858] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c380 00:24:23.708 [2024-07-26 05:21:42.817872] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:23.967 [2024-07-26 05:21:42.820535] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:23.967 [2024-07-26 05:21:42.820793] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:23.967 [2024-07-26 05:21:42.820948] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:24:23.967 [2024-07-26 05:21:42.821020] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:23.967 pt1 00:24:23.967 05:21:42 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:24:23.967 05:21:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:23.967 05:21:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:23.967 05:21:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:23.967 05:21:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:23.967 05:21:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:23.967 05:21:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:23.967 05:21:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:23.967 05:21:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:23.967 05:21:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:23.967 05:21:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.967 05:21:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.226 05:21:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:24.226 "name": "raid_bdev1", 00:24:24.226 "uuid": "585302a4-f968-462e-9c96-744a54cbf1d9", 00:24:24.226 "strip_size_kb": 64, 00:24:24.226 "state": "configuring", 00:24:24.226 "raid_level": "raid5f", 00:24:24.226 "superblock": true, 00:24:24.226 "num_base_bdevs": 4, 00:24:24.226 "num_base_bdevs_discovered": 1, 00:24:24.226 "num_base_bdevs_operational": 4, 00:24:24.226 "base_bdevs_list": [ 00:24:24.226 { 00:24:24.226 "name": "pt1", 00:24:24.226 "uuid": "82449e97-48ec-56a4-ba73-24b3bce689ae", 00:24:24.226 "is_configured": true, 
00:24:24.226 "data_offset": 2048, 00:24:24.226 "data_size": 63488 00:24:24.226 }, 00:24:24.226 { 00:24:24.226 "name": null, 00:24:24.226 "uuid": "5b77f0c3-190e-5c34-93b5-40c282ff96b7", 00:24:24.226 "is_configured": false, 00:24:24.226 "data_offset": 2048, 00:24:24.226 "data_size": 63488 00:24:24.226 }, 00:24:24.226 { 00:24:24.226 "name": null, 00:24:24.226 "uuid": "7c63dfa7-1f74-572d-bcf6-c419571f2e66", 00:24:24.226 "is_configured": false, 00:24:24.226 "data_offset": 2048, 00:24:24.226 "data_size": 63488 00:24:24.226 }, 00:24:24.226 { 00:24:24.226 "name": null, 00:24:24.226 "uuid": "68014fd6-7ce4-51cd-98af-72e8ecaa5c2d", 00:24:24.226 "is_configured": false, 00:24:24.226 "data_offset": 2048, 00:24:24.226 "data_size": 63488 00:24:24.226 } 00:24:24.226 ] 00:24:24.226 }' 00:24:24.226 05:21:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:24.226 05:21:43 -- common/autotest_common.sh@10 -- # set +x 00:24:24.484 05:21:43 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:24:24.484 05:21:43 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:24:24.484 05:21:43 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:24.484 05:21:43 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:24:24.484 05:21:43 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:24:24.484 05:21:43 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:24.742 05:21:43 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:24:24.742 05:21:43 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:24:24.742 05:21:43 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:24:25.001 05:21:44 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:24:25.001 05:21:44 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:24:25.001 05:21:44 -- bdev/bdev_raid.sh@489 -- # i=3 00:24:25.001 05:21:44 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:25.259 [2024-07-26 05:21:44.162063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:25.259 [2024-07-26 05:21:44.162318] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:25.259 [2024-07-26 05:21:44.162355] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000cc80 00:24:25.259 [2024-07-26 05:21:44.162386] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:25.259 [2024-07-26 05:21:44.162862] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:25.259 [2024-07-26 05:21:44.162889] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:25.259 [2024-07-26 05:21:44.162976] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:24:25.259 [2024-07-26 05:21:44.162998] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:24:25.259 [2024-07-26 05:21:44.163009] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:25.259 [2024-07-26 05:21:44.163092] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000c980 name raid_bdev1, state configuring 00:24:25.259 [2024-07-26 05:21:44.163190] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:25.259 pt4 00:24:25.259 05:21:44 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:24:25.259 05:21:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:25.259 05:21:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:25.259 05:21:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:25.259 05:21:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:25.259 05:21:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:25.259 05:21:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:25.259 05:21:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:25.259 05:21:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:25.259 05:21:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:25.259 05:21:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:25.259 05:21:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:25.518 05:21:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:25.518 "name": "raid_bdev1", 00:24:25.518 "uuid": "585302a4-f968-462e-9c96-744a54cbf1d9", 00:24:25.518 "strip_size_kb": 64, 00:24:25.518 "state": "configuring", 00:24:25.518 "raid_level": "raid5f", 00:24:25.518 "superblock": true, 00:24:25.518 "num_base_bdevs": 4, 00:24:25.518 "num_base_bdevs_discovered": 1, 00:24:25.518 "num_base_bdevs_operational": 3, 00:24:25.518 "base_bdevs_list": [ 00:24:25.518 { 00:24:25.518 "name": null, 00:24:25.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.518 "is_configured": false, 00:24:25.518 "data_offset": 2048, 00:24:25.518 "data_size": 63488 00:24:25.518 }, 00:24:25.518 { 00:24:25.518 "name": null, 00:24:25.518 "uuid": "5b77f0c3-190e-5c34-93b5-40c282ff96b7", 00:24:25.518 "is_configured": false, 00:24:25.518 "data_offset": 2048, 00:24:25.518 "data_size": 63488 00:24:25.518 }, 00:24:25.518 { 00:24:25.518 "name": null, 00:24:25.518 "uuid": "7c63dfa7-1f74-572d-bcf6-c419571f2e66", 00:24:25.518 "is_configured": false, 00:24:25.518 "data_offset": 2048, 00:24:25.518 "data_size": 63488 00:24:25.518 }, 00:24:25.518 { 00:24:25.518 "name": "pt4", 00:24:25.518 "uuid": "68014fd6-7ce4-51cd-98af-72e8ecaa5c2d", 00:24:25.518 "is_configured": true, 00:24:25.518 "data_offset": 2048, 00:24:25.518 "data_size": 63488 00:24:25.518 } 00:24:25.518 ] 00:24:25.518 }' 00:24:25.518 05:21:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:25.518 05:21:44 -- common/autotest_common.sh@10 -- # set +x 00:24:25.777 05:21:44 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:24:25.777 05:21:44 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:24:25.777 05:21:44 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:26.035 [2024-07-26 05:21:44.938257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:26.035 [2024-07-26 05:21:44.938551] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:26.035 [2024-07-26 05:21:44.938596] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000d280 00:24:26.035 [2024-07-26 05:21:44.938610] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:26.035 [2024-07-26 05:21:44.939151] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:26.035 [2024-07-26 05:21:44.939212] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:26.035 [2024-07-26 05:21:44.939305] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:26.035 [2024-07-26 05:21:44.939338] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:26.035 pt2 00:24:26.035 05:21:44 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:24:26.035 05:21:44 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:24:26.035 05:21:44 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:26.294 [2024-07-26 05:21:45.194314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:26.294 [2024-07-26 05:21:45.194534] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:26.294 [2024-07-26 05:21:45.194613] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000d580 00:24:26.294 [2024-07-26 05:21:45.194889] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:26.294 [2024-07-26 05:21:45.195372] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:26.294 [2024-07-26 05:21:45.195520] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:26.294 [2024-07-26 05:21:45.195726] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:24:26.294 [2024-07-26 05:21:45.195861] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:26.294 [2024-07-26 05:21:45.196056] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000cf80 00:24:26.294 [2024-07-26 05:21:45.196190] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:26.294 [2024-07-26 05:21:45.196344] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:24:26.294 [2024-07-26 05:21:45.201712] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000cf80 00:24:26.294 [2024-07-26 05:21:45.201855] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000cf80 00:24:26.294 [2024-07-26 05:21:45.202252] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:26.294 pt3 00:24:26.294 05:21:45 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:24:26.294 05:21:45 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:24:26.294 05:21:45 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:26.294 05:21:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:26.294 05:21:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:26.294 05:21:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:26.294 05:21:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:26.294 05:21:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:26.294 05:21:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:26.294 05:21:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:26.294 05:21:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:26.294 05:21:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:26.294 05:21:45 -- bdev/bdev_raid.sh@127 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:24:26.294 05:21:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.553 05:21:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:26.553 "name": "raid_bdev1", 00:24:26.553 "uuid": "585302a4-f968-462e-9c96-744a54cbf1d9", 00:24:26.553 "strip_size_kb": 64, 00:24:26.553 "state": "online", 00:24:26.553 "raid_level": "raid5f", 00:24:26.553 "superblock": true, 00:24:26.553 "num_base_bdevs": 4, 00:24:26.553 "num_base_bdevs_discovered": 3, 00:24:26.553 "num_base_bdevs_operational": 3, 00:24:26.553 "base_bdevs_list": [ 00:24:26.553 { 00:24:26.553 "name": null, 00:24:26.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.553 "is_configured": false, 00:24:26.553 "data_offset": 2048, 00:24:26.553 "data_size": 63488 00:24:26.553 }, 00:24:26.553 { 00:24:26.553 "name": "pt2", 00:24:26.553 "uuid": "5b77f0c3-190e-5c34-93b5-40c282ff96b7", 00:24:26.553 "is_configured": true, 00:24:26.553 "data_offset": 2048, 00:24:26.553 "data_size": 63488 00:24:26.553 }, 00:24:26.553 { 00:24:26.553 "name": "pt3", 00:24:26.553 "uuid": "7c63dfa7-1f74-572d-bcf6-c419571f2e66", 00:24:26.553 "is_configured": true, 00:24:26.553 "data_offset": 2048, 00:24:26.553 "data_size": 63488 00:24:26.553 }, 00:24:26.553 { 00:24:26.553 "name": "pt4", 00:24:26.553 "uuid": "68014fd6-7ce4-51cd-98af-72e8ecaa5c2d", 00:24:26.553 "is_configured": true, 00:24:26.553 "data_offset": 2048, 00:24:26.553 "data_size": 63488 00:24:26.553 } 00:24:26.553 ] 00:24:26.553 }' 00:24:26.553 05:21:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:26.553 05:21:45 -- common/autotest_common.sh@10 -- # set +x 00:24:26.812 05:21:45 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:24:26.812 05:21:45 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:26.812 [2024-07-26 05:21:45.908202] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:27.071 05:21:45 -- bdev/bdev_raid.sh@506 -- # '[' 585302a4-f968-462e-9c96-744a54cbf1d9 '!=' 585302a4-f968-462e-9c96-744a54cbf1d9 ']' 00:24:27.071 05:21:45 -- bdev/bdev_raid.sh@511 -- # killprocess 85294 00:24:27.071 05:21:45 -- common/autotest_common.sh@926 -- # '[' -z 85294 ']' 00:24:27.071 05:21:45 -- common/autotest_common.sh@930 -- # kill -0 85294 00:24:27.071 05:21:45 -- common/autotest_common.sh@931 -- # uname 00:24:27.071 05:21:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:27.071 05:21:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85294 00:24:27.071 killing process with pid 85294 00:24:27.071 05:21:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:27.071 05:21:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:27.071 05:21:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85294' 00:24:27.071 05:21:45 -- common/autotest_common.sh@945 -- # kill 85294 00:24:27.071 [2024-07-26 05:21:45.953931] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:27.071 [2024-07-26 05:21:45.954001] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:27.071 05:21:45 -- common/autotest_common.sh@950 -- # wait 85294 00:24:27.071 [2024-07-26 05:21:45.954124] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:27.071 [2024-07-26 05:21:45.954143] bdev_raid.c: 
351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000cf80 name raid_bdev1, state offline 00:24:27.330 [2024-07-26 05:21:46.206949] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@513 -- # return 0 00:24:28.268 00:24:28.268 real 0m17.162s 00:24:28.268 user 0m29.746s 00:24:28.268 sys 0m2.604s 00:24:28.268 ************************************ 00:24:28.268 END TEST raid5f_superblock_test 00:24:28.268 ************************************ 00:24:28.268 05:21:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:28.268 05:21:47 -- common/autotest_common.sh@10 -- # set +x 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false 00:24:28.268 05:21:47 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:24:28.268 05:21:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:28.268 05:21:47 -- common/autotest_common.sh@10 -- # set +x 00:24:28.268 ************************************ 00:24:28.268 START TEST raid5f_rebuild_test 00:24:28.268 ************************************ 00:24:28.268 05:21:47 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 4 false false 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:24:28.268 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk-raid.sock... 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@544 -- # raid_pid=85887 00:24:28.268 05:21:47 -- bdev/bdev_raid.sh@545 -- # waitforlisten 85887 /var/tmp/spdk-raid.sock 00:24:28.269 05:21:47 -- common/autotest_common.sh@819 -- # '[' -z 85887 ']' 00:24:28.269 05:21:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:28.269 05:21:47 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:28.269 05:21:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:28.269 05:21:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:28.269 05:21:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:28.269 05:21:47 -- common/autotest_common.sh@10 -- # set +x 00:24:28.269 [2024-07-26 05:21:47.258313] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:28.269 [2024-07-26 05:21:47.258671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:24:28.269 Zero copy mechanism will not be used. 00:24:28.269 :6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85887 ] 00:24:28.528 [2024-07-26 05:21:47.428592] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.528 [2024-07-26 05:21:47.575699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.787 [2024-07-26 05:21:47.720101] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:29.355 05:21:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:29.355 05:21:48 -- common/autotest_common.sh@852 -- # return 0 00:24:29.355 05:21:48 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:29.355 05:21:48 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:29.355 05:21:48 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:29.355 BaseBdev1 00:24:29.355 05:21:48 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:29.355 05:21:48 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:29.355 05:21:48 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:29.614 BaseBdev2 00:24:29.614 05:21:48 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:29.614 05:21:48 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:29.614 05:21:48 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:29.872 BaseBdev3 00:24:29.873 05:21:48 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:29.873 05:21:48 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:29.873 05:21:48 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:30.131 BaseBdev4 00:24:30.131 05:21:49 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:30.406 spare_malloc 00:24:30.406 05:21:49 -- 
bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:30.677 spare_delay 00:24:30.677 05:21:49 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:30.677 [2024-07-26 05:21:49.731851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:30.677 [2024-07-26 05:21:49.731917] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:30.677 [2024-07-26 05:21:49.731943] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:24:30.677 [2024-07-26 05:21:49.731958] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:30.677 [2024-07-26 05:21:49.734079] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:30.677 [2024-07-26 05:21:49.734123] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:30.677 spare 00:24:30.677 05:21:49 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:24:30.936 [2024-07-26 05:21:49.919960] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:30.936 [2024-07-26 05:21:49.922033] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:30.936 [2024-07-26 05:21:49.922101] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:30.936 [2024-07-26 05:21:49.922165] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:30.936 [2024-07-26 05:21:49.922249] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:24:30.936 [2024-07-26 05:21:49.922264] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:24:30.936 [2024-07-26 05:21:49.922444] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:24:30.936 [2024-07-26 05:21:49.928377] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:24:30.936 [2024-07-26 05:21:49.928401] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:24:30.936 [2024-07-26 05:21:49.928621] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:30.936 05:21:49 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:30.936 05:21:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:30.936 05:21:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:30.936 05:21:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:30.936 05:21:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:30.936 05:21:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:30.936 05:21:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:30.936 05:21:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:30.936 05:21:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:30.936 05:21:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:30.936 05:21:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
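The trace above is the test's state check: it dumps every raid bdev over the UNIX-socket RPC and filters the JSON down to the bdev under test. Below is a minimal standalone sketch of that check; the helper name check_raid_state and the expected values passed to it are illustrative, while the rpc.py invocation, the socket path, and the JSON field names are taken from the trace itself.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    check_raid_state() {
        # name / expected state / raid level / strip size (KiB) / discovered base bdevs
        local name=$1 want_state=$2 want_level=$3 want_strip=$4 want_found=$5
        local info
        # Same pipeline as the trace: dump all raid bdevs, keep only the one under test.
        info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
               jq -r ".[] | select(.name == \"$name\")")
        [[ $(jq -r '.state' <<<"$info") == "$want_state" ]] &&
        [[ $(jq -r '.raid_level' <<<"$info") == "$want_level" ]] &&
        [[ $(jq -r '.strip_size_kb' <<<"$info") == "$want_strip" ]] &&
        [[ $(jq -r '.num_base_bdevs_discovered' <<<"$info") == "$want_found" ]]
    }

    # Mirrors "verify_raid_bdev_state raid_bdev1 online raid5f 64 4" from the trace.
    check_raid_state raid_bdev1 online raid5f 64 4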
00:24:30.936 05:21:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:31.195 05:21:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:31.195 "name": "raid_bdev1", 00:24:31.195 "uuid": "4a5a7d6c-b41b-4e53-821b-98c5c80a3c98", 00:24:31.195 "strip_size_kb": 64, 00:24:31.195 "state": "online", 00:24:31.195 "raid_level": "raid5f", 00:24:31.195 "superblock": false, 00:24:31.195 "num_base_bdevs": 4, 00:24:31.195 "num_base_bdevs_discovered": 4, 00:24:31.195 "num_base_bdevs_operational": 4, 00:24:31.195 "base_bdevs_list": [ 00:24:31.195 { 00:24:31.195 "name": "BaseBdev1", 00:24:31.195 "uuid": "92500001-ddbb-47d2-93a1-a883f40424ae", 00:24:31.195 "is_configured": true, 00:24:31.195 "data_offset": 0, 00:24:31.195 "data_size": 65536 00:24:31.195 }, 00:24:31.195 { 00:24:31.195 "name": "BaseBdev2", 00:24:31.195 "uuid": "5631cc15-814c-4442-86bf-fa837f938a3b", 00:24:31.195 "is_configured": true, 00:24:31.195 "data_offset": 0, 00:24:31.195 "data_size": 65536 00:24:31.195 }, 00:24:31.195 { 00:24:31.195 "name": "BaseBdev3", 00:24:31.195 "uuid": "2a919fc8-7104-4449-9126-fcff4fbf9532", 00:24:31.195 "is_configured": true, 00:24:31.195 "data_offset": 0, 00:24:31.195 "data_size": 65536 00:24:31.195 }, 00:24:31.195 { 00:24:31.195 "name": "BaseBdev4", 00:24:31.195 "uuid": "afc48645-28fb-4067-8edc-131746eaf81a", 00:24:31.195 "is_configured": true, 00:24:31.195 "data_offset": 0, 00:24:31.195 "data_size": 65536 00:24:31.195 } 00:24:31.195 ] 00:24:31.195 }' 00:24:31.195 05:21:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:31.195 05:21:50 -- common/autotest_common.sh@10 -- # set +x 00:24:31.454 05:21:50 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:31.454 05:21:50 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:24:31.713 [2024-07-26 05:21:50.618518] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:31.713 05:21:50 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=196608 00:24:31.713 05:21:50 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.713 05:21:50 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:31.971 05:21:50 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:24:31.971 05:21:50 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:24:31.972 05:21:50 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:24:31.972 05:21:50 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:31.972 05:21:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:31.972 05:21:50 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:31.972 05:21:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:31.972 05:21:50 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:31.972 05:21:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:31.972 05:21:50 -- bdev/nbd_common.sh@12 -- # local i 00:24:31.972 05:21:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:31.972 05:21:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:31.972 05:21:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:31.972 [2024-07-26 05:21:50.994505] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:24:31.972 /dev/nbd0 00:24:31.972 05:21:51 -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd0 00:24:31.972 05:21:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:31.972 05:21:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:31.972 05:21:51 -- common/autotest_common.sh@857 -- # local i 00:24:31.972 05:21:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:31.972 05:21:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:31.972 05:21:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:31.972 05:21:51 -- common/autotest_common.sh@861 -- # break 00:24:31.972 05:21:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:31.972 05:21:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:31.972 05:21:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:31.972 1+0 records in 00:24:31.972 1+0 records out 00:24:31.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234081 s, 17.5 MB/s 00:24:31.972 05:21:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:31.972 05:21:51 -- common/autotest_common.sh@874 -- # size=4096 00:24:31.972 05:21:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:31.972 05:21:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:31.972 05:21:51 -- common/autotest_common.sh@877 -- # return 0 00:24:31.972 05:21:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:31.972 05:21:51 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:31.972 05:21:51 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:24:31.972 05:21:51 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:24:31.972 05:21:51 -- bdev/bdev_raid.sh@582 -- # echo 192 00:24:31.972 05:21:51 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:24:32.539 512+0 records in 00:24:32.539 512+0 records out 00:24:32.539 100663296 bytes (101 MB, 96 MiB) copied, 0.517425 s, 195 MB/s 00:24:32.539 05:21:51 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:32.539 05:21:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:32.539 05:21:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:32.539 05:21:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:32.539 05:21:51 -- bdev/nbd_common.sh@51 -- # local i 00:24:32.539 05:21:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:32.539 05:21:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:32.798 [2024-07-26 05:21:51.803203] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:32.798 05:21:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:32.798 05:21:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:32.798 05:21:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:32.798 05:21:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:32.798 05:21:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:32.798 05:21:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:32.798 05:21:51 -- bdev/nbd_common.sh@41 -- # break 00:24:32.798 05:21:51 -- bdev/nbd_common.sh@45 -- # return 0 00:24:32.798 05:21:51 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:33.057 [2024-07-26 05:21:52.066614] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 
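At this point the test has degraded the freshly built array: it exported raid_bdev1 over NBD, filled it with full-stripe writes (196608 bytes = 384 blocks of 512 B, i.e. three 64 KiB data strips per stripe), detached the NBD device, and then removed BaseBdev1 so only three of the four base bdevs remain. A condensed, illustrative replay of that sequence follows; the commands are the ones visible in the trace, only stripped of the xtrace noise.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Expose the raid5f bdev as a block device and write whole stripes into it.
    "$rpc" -s "$sock" nbd_start_disk raid_bdev1 /dev/nbd0
    dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0

    # Drop one base bdev; the array should stay online with 3 of 4 members.
    "$rpc" -s "$sock" bdev_raid_remove_base_bdev BaseBdev1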
00:24:33.057 05:21:52 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:33.057 05:21:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:33.057 05:21:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:33.057 05:21:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:33.057 05:21:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:33.057 05:21:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:33.057 05:21:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:33.057 05:21:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:33.057 05:21:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:33.057 05:21:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:33.057 05:21:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.057 05:21:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:33.316 05:21:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:33.316 "name": "raid_bdev1", 00:24:33.316 "uuid": "4a5a7d6c-b41b-4e53-821b-98c5c80a3c98", 00:24:33.316 "strip_size_kb": 64, 00:24:33.316 "state": "online", 00:24:33.316 "raid_level": "raid5f", 00:24:33.316 "superblock": false, 00:24:33.316 "num_base_bdevs": 4, 00:24:33.316 "num_base_bdevs_discovered": 3, 00:24:33.316 "num_base_bdevs_operational": 3, 00:24:33.316 "base_bdevs_list": [ 00:24:33.316 { 00:24:33.316 "name": null, 00:24:33.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.316 "is_configured": false, 00:24:33.316 "data_offset": 0, 00:24:33.316 "data_size": 65536 00:24:33.316 }, 00:24:33.316 { 00:24:33.316 "name": "BaseBdev2", 00:24:33.316 "uuid": "5631cc15-814c-4442-86bf-fa837f938a3b", 00:24:33.316 "is_configured": true, 00:24:33.316 "data_offset": 0, 00:24:33.316 "data_size": 65536 00:24:33.316 }, 00:24:33.316 { 00:24:33.316 "name": "BaseBdev3", 00:24:33.316 "uuid": "2a919fc8-7104-4449-9126-fcff4fbf9532", 00:24:33.316 "is_configured": true, 00:24:33.316 "data_offset": 0, 00:24:33.316 "data_size": 65536 00:24:33.316 }, 00:24:33.316 { 00:24:33.316 "name": "BaseBdev4", 00:24:33.316 "uuid": "afc48645-28fb-4067-8edc-131746eaf81a", 00:24:33.316 "is_configured": true, 00:24:33.316 "data_offset": 0, 00:24:33.316 "data_size": 65536 00:24:33.316 } 00:24:33.316 ] 00:24:33.316 }' 00:24:33.316 05:21:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:33.316 05:21:52 -- common/autotest_common.sh@10 -- # set +x 00:24:33.575 05:21:52 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:33.834 [2024-07-26 05:21:52.738757] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:33.834 [2024-07-26 05:21:52.738847] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:33.834 [2024-07-26 05:21:52.749514] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002b000 00:24:33.834 [2024-07-26 05:21:52.756880] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:33.834 05:21:52 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:24:34.784 05:21:53 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:34.784 05:21:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:34.784 05:21:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 
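A spare has just been attached with bdev_raid_add_base_bdev and the rebuild onto it has started, so from here the test repeatedly samples bdev_raid_get_bdevs and inspects the "process" object (type, target, and progress in blocks/percent) until it disappears. The loop below is only a sketch of that polling, written against the same field names shown in the trace; it is an assumption about how one could watch the rebuild, not the verify_raid_bdev_process helper itself.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    while :; do
        info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
               jq -r '.[] | select(.name == "raid_bdev1")')
        # The process object is present only while a rebuild is running.
        [[ $(jq -r '.process.type // "none"' <<<"$info") == rebuild ]] || break
        echo "rebuilding onto $(jq -r '.process.target // "none"' <<<"$info"):" \
             "$(jq -r '.process.progress.percent' <<<"$info")%" \
             "($(jq -r '.process.progress.blocks' <<<"$info") blocks)"
        sleep 1
    done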
00:24:34.784 05:21:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:34.784 05:21:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:34.784 05:21:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.784 05:21:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:35.042 05:21:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:35.042 "name": "raid_bdev1", 00:24:35.042 "uuid": "4a5a7d6c-b41b-4e53-821b-98c5c80a3c98", 00:24:35.042 "strip_size_kb": 64, 00:24:35.042 "state": "online", 00:24:35.042 "raid_level": "raid5f", 00:24:35.042 "superblock": false, 00:24:35.042 "num_base_bdevs": 4, 00:24:35.042 "num_base_bdevs_discovered": 4, 00:24:35.042 "num_base_bdevs_operational": 4, 00:24:35.042 "process": { 00:24:35.042 "type": "rebuild", 00:24:35.042 "target": "spare", 00:24:35.042 "progress": { 00:24:35.042 "blocks": 23040, 00:24:35.042 "percent": 11 00:24:35.042 } 00:24:35.042 }, 00:24:35.042 "base_bdevs_list": [ 00:24:35.042 { 00:24:35.042 "name": "spare", 00:24:35.042 "uuid": "487fb768-f265-5f65-a298-a5993c1f4207", 00:24:35.042 "is_configured": true, 00:24:35.042 "data_offset": 0, 00:24:35.042 "data_size": 65536 00:24:35.042 }, 00:24:35.042 { 00:24:35.042 "name": "BaseBdev2", 00:24:35.042 "uuid": "5631cc15-814c-4442-86bf-fa837f938a3b", 00:24:35.042 "is_configured": true, 00:24:35.042 "data_offset": 0, 00:24:35.042 "data_size": 65536 00:24:35.042 }, 00:24:35.042 { 00:24:35.042 "name": "BaseBdev3", 00:24:35.043 "uuid": "2a919fc8-7104-4449-9126-fcff4fbf9532", 00:24:35.043 "is_configured": true, 00:24:35.043 "data_offset": 0, 00:24:35.043 "data_size": 65536 00:24:35.043 }, 00:24:35.043 { 00:24:35.043 "name": "BaseBdev4", 00:24:35.043 "uuid": "afc48645-28fb-4067-8edc-131746eaf81a", 00:24:35.043 "is_configured": true, 00:24:35.043 "data_offset": 0, 00:24:35.043 "data_size": 65536 00:24:35.043 } 00:24:35.043 ] 00:24:35.043 }' 00:24:35.043 05:21:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:35.043 05:21:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:35.043 05:21:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:35.043 05:21:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:35.043 05:21:54 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:35.302 [2024-07-26 05:21:54.278068] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:35.302 [2024-07-26 05:21:54.366623] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:35.302 [2024-07-26 05:21:54.366690] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:35.302 05:21:54 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:35.302 05:21:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:35.302 05:21:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:35.302 05:21:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:35.302 05:21:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:35.302 05:21:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:35.302 05:21:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:35.302 05:21:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:35.302 05:21:54 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:24:35.302 05:21:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:35.302 05:21:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:35.302 05:21:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:35.561 05:21:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:35.561 "name": "raid_bdev1", 00:24:35.561 "uuid": "4a5a7d6c-b41b-4e53-821b-98c5c80a3c98", 00:24:35.561 "strip_size_kb": 64, 00:24:35.561 "state": "online", 00:24:35.561 "raid_level": "raid5f", 00:24:35.561 "superblock": false, 00:24:35.561 "num_base_bdevs": 4, 00:24:35.561 "num_base_bdevs_discovered": 3, 00:24:35.561 "num_base_bdevs_operational": 3, 00:24:35.561 "base_bdevs_list": [ 00:24:35.561 { 00:24:35.561 "name": null, 00:24:35.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:35.561 "is_configured": false, 00:24:35.561 "data_offset": 0, 00:24:35.561 "data_size": 65536 00:24:35.561 }, 00:24:35.561 { 00:24:35.561 "name": "BaseBdev2", 00:24:35.561 "uuid": "5631cc15-814c-4442-86bf-fa837f938a3b", 00:24:35.561 "is_configured": true, 00:24:35.561 "data_offset": 0, 00:24:35.561 "data_size": 65536 00:24:35.561 }, 00:24:35.561 { 00:24:35.561 "name": "BaseBdev3", 00:24:35.561 "uuid": "2a919fc8-7104-4449-9126-fcff4fbf9532", 00:24:35.561 "is_configured": true, 00:24:35.561 "data_offset": 0, 00:24:35.561 "data_size": 65536 00:24:35.561 }, 00:24:35.561 { 00:24:35.561 "name": "BaseBdev4", 00:24:35.561 "uuid": "afc48645-28fb-4067-8edc-131746eaf81a", 00:24:35.561 "is_configured": true, 00:24:35.561 "data_offset": 0, 00:24:35.561 "data_size": 65536 00:24:35.561 } 00:24:35.561 ] 00:24:35.561 }' 00:24:35.561 05:21:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:35.561 05:21:54 -- common/autotest_common.sh@10 -- # set +x 00:24:36.129 05:21:54 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:36.129 05:21:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:36.129 05:21:54 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:36.129 05:21:54 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:36.129 05:21:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:36.129 05:21:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:36.129 05:21:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.129 05:21:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:36.129 "name": "raid_bdev1", 00:24:36.129 "uuid": "4a5a7d6c-b41b-4e53-821b-98c5c80a3c98", 00:24:36.129 "strip_size_kb": 64, 00:24:36.129 "state": "online", 00:24:36.129 "raid_level": "raid5f", 00:24:36.129 "superblock": false, 00:24:36.129 "num_base_bdevs": 4, 00:24:36.129 "num_base_bdevs_discovered": 3, 00:24:36.129 "num_base_bdevs_operational": 3, 00:24:36.129 "base_bdevs_list": [ 00:24:36.129 { 00:24:36.129 "name": null, 00:24:36.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.129 "is_configured": false, 00:24:36.129 "data_offset": 0, 00:24:36.129 "data_size": 65536 00:24:36.129 }, 00:24:36.129 { 00:24:36.129 "name": "BaseBdev2", 00:24:36.129 "uuid": "5631cc15-814c-4442-86bf-fa837f938a3b", 00:24:36.129 "is_configured": true, 00:24:36.129 "data_offset": 0, 00:24:36.129 "data_size": 65536 00:24:36.129 }, 00:24:36.129 { 00:24:36.129 "name": "BaseBdev3", 00:24:36.129 "uuid": "2a919fc8-7104-4449-9126-fcff4fbf9532", 00:24:36.129 "is_configured": true, 
00:24:36.129 "data_offset": 0, 00:24:36.129 "data_size": 65536 00:24:36.129 }, 00:24:36.129 { 00:24:36.129 "name": "BaseBdev4", 00:24:36.129 "uuid": "afc48645-28fb-4067-8edc-131746eaf81a", 00:24:36.129 "is_configured": true, 00:24:36.129 "data_offset": 0, 00:24:36.129 "data_size": 65536 00:24:36.129 } 00:24:36.129 ] 00:24:36.129 }' 00:24:36.129 05:21:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:36.129 05:21:55 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:36.129 05:21:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:36.129 05:21:55 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:36.129 05:21:55 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:36.387 [2024-07-26 05:21:55.437929] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:36.387 [2024-07-26 05:21:55.438241] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:36.387 [2024-07-26 05:21:55.447858] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002b0d0 00:24:36.387 [2024-07-26 05:21:55.455301] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:36.388 05:21:55 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:24:37.763 05:21:56 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:37.763 05:21:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:37.763 05:21:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:37.763 05:21:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:37.763 05:21:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:37.763 05:21:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:37.763 05:21:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:37.763 05:21:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:37.763 "name": "raid_bdev1", 00:24:37.763 "uuid": "4a5a7d6c-b41b-4e53-821b-98c5c80a3c98", 00:24:37.763 "strip_size_kb": 64, 00:24:37.763 "state": "online", 00:24:37.763 "raid_level": "raid5f", 00:24:37.763 "superblock": false, 00:24:37.763 "num_base_bdevs": 4, 00:24:37.763 "num_base_bdevs_discovered": 4, 00:24:37.763 "num_base_bdevs_operational": 4, 00:24:37.763 "process": { 00:24:37.763 "type": "rebuild", 00:24:37.763 "target": "spare", 00:24:37.763 "progress": { 00:24:37.763 "blocks": 21120, 00:24:37.763 "percent": 10 00:24:37.763 } 00:24:37.763 }, 00:24:37.763 "base_bdevs_list": [ 00:24:37.763 { 00:24:37.763 "name": "spare", 00:24:37.763 "uuid": "487fb768-f265-5f65-a298-a5993c1f4207", 00:24:37.763 "is_configured": true, 00:24:37.763 "data_offset": 0, 00:24:37.763 "data_size": 65536 00:24:37.763 }, 00:24:37.763 { 00:24:37.763 "name": "BaseBdev2", 00:24:37.763 "uuid": "5631cc15-814c-4442-86bf-fa837f938a3b", 00:24:37.763 "is_configured": true, 00:24:37.763 "data_offset": 0, 00:24:37.763 "data_size": 65536 00:24:37.763 }, 00:24:37.763 { 00:24:37.763 "name": "BaseBdev3", 00:24:37.763 "uuid": "2a919fc8-7104-4449-9126-fcff4fbf9532", 00:24:37.763 "is_configured": true, 00:24:37.763 "data_offset": 0, 00:24:37.763 "data_size": 65536 00:24:37.763 }, 00:24:37.763 { 00:24:37.763 "name": "BaseBdev4", 00:24:37.763 "uuid": "afc48645-28fb-4067-8edc-131746eaf81a", 00:24:37.763 "is_configured": true, 00:24:37.763 "data_offset": 0, 
00:24:37.763 "data_size": 65536 00:24:37.763 } 00:24:37.763 ] 00:24:37.763 }' 00:24:37.763 05:21:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:37.763 05:21:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:37.763 05:21:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:37.763 05:21:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:37.763 05:21:56 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:24:37.763 05:21:56 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:24:37.763 05:21:56 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:24:37.763 05:21:56 -- bdev/bdev_raid.sh@657 -- # local timeout=623 00:24:37.763 05:21:56 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:37.763 05:21:56 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:37.763 05:21:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:37.763 05:21:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:37.763 05:21:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:37.763 05:21:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:37.763 05:21:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:37.763 05:21:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:38.022 05:21:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:38.022 "name": "raid_bdev1", 00:24:38.022 "uuid": "4a5a7d6c-b41b-4e53-821b-98c5c80a3c98", 00:24:38.022 "strip_size_kb": 64, 00:24:38.022 "state": "online", 00:24:38.022 "raid_level": "raid5f", 00:24:38.022 "superblock": false, 00:24:38.022 "num_base_bdevs": 4, 00:24:38.022 "num_base_bdevs_discovered": 4, 00:24:38.022 "num_base_bdevs_operational": 4, 00:24:38.022 "process": { 00:24:38.022 "type": "rebuild", 00:24:38.022 "target": "spare", 00:24:38.022 "progress": { 00:24:38.022 "blocks": 26880, 00:24:38.022 "percent": 13 00:24:38.022 } 00:24:38.022 }, 00:24:38.022 "base_bdevs_list": [ 00:24:38.022 { 00:24:38.022 "name": "spare", 00:24:38.022 "uuid": "487fb768-f265-5f65-a298-a5993c1f4207", 00:24:38.022 "is_configured": true, 00:24:38.022 "data_offset": 0, 00:24:38.022 "data_size": 65536 00:24:38.022 }, 00:24:38.022 { 00:24:38.022 "name": "BaseBdev2", 00:24:38.022 "uuid": "5631cc15-814c-4442-86bf-fa837f938a3b", 00:24:38.022 "is_configured": true, 00:24:38.022 "data_offset": 0, 00:24:38.022 "data_size": 65536 00:24:38.022 }, 00:24:38.022 { 00:24:38.022 "name": "BaseBdev3", 00:24:38.022 "uuid": "2a919fc8-7104-4449-9126-fcff4fbf9532", 00:24:38.022 "is_configured": true, 00:24:38.022 "data_offset": 0, 00:24:38.022 "data_size": 65536 00:24:38.022 }, 00:24:38.022 { 00:24:38.022 "name": "BaseBdev4", 00:24:38.022 "uuid": "afc48645-28fb-4067-8edc-131746eaf81a", 00:24:38.022 "is_configured": true, 00:24:38.022 "data_offset": 0, 00:24:38.022 "data_size": 65536 00:24:38.022 } 00:24:38.022 ] 00:24:38.022 }' 00:24:38.022 05:21:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:38.022 05:21:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:38.022 05:21:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:38.022 05:21:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:38.022 05:21:56 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:38.958 05:21:57 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:38.958 05:21:57 -- bdev/bdev_raid.sh@659 -- 
# verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:38.958 05:21:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:38.958 05:21:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:38.958 05:21:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:38.958 05:21:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:38.958 05:21:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.958 05:21:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.216 05:21:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:39.216 "name": "raid_bdev1", 00:24:39.216 "uuid": "4a5a7d6c-b41b-4e53-821b-98c5c80a3c98", 00:24:39.216 "strip_size_kb": 64, 00:24:39.216 "state": "online", 00:24:39.216 "raid_level": "raid5f", 00:24:39.216 "superblock": false, 00:24:39.216 "num_base_bdevs": 4, 00:24:39.216 "num_base_bdevs_discovered": 4, 00:24:39.216 "num_base_bdevs_operational": 4, 00:24:39.217 "process": { 00:24:39.217 "type": "rebuild", 00:24:39.217 "target": "spare", 00:24:39.217 "progress": { 00:24:39.217 "blocks": 49920, 00:24:39.217 "percent": 25 00:24:39.217 } 00:24:39.217 }, 00:24:39.217 "base_bdevs_list": [ 00:24:39.217 { 00:24:39.217 "name": "spare", 00:24:39.217 "uuid": "487fb768-f265-5f65-a298-a5993c1f4207", 00:24:39.217 "is_configured": true, 00:24:39.217 "data_offset": 0, 00:24:39.217 "data_size": 65536 00:24:39.217 }, 00:24:39.217 { 00:24:39.217 "name": "BaseBdev2", 00:24:39.217 "uuid": "5631cc15-814c-4442-86bf-fa837f938a3b", 00:24:39.217 "is_configured": true, 00:24:39.217 "data_offset": 0, 00:24:39.217 "data_size": 65536 00:24:39.217 }, 00:24:39.217 { 00:24:39.217 "name": "BaseBdev3", 00:24:39.217 "uuid": "2a919fc8-7104-4449-9126-fcff4fbf9532", 00:24:39.217 "is_configured": true, 00:24:39.217 "data_offset": 0, 00:24:39.217 "data_size": 65536 00:24:39.217 }, 00:24:39.217 { 00:24:39.217 "name": "BaseBdev4", 00:24:39.217 "uuid": "afc48645-28fb-4067-8edc-131746eaf81a", 00:24:39.217 "is_configured": true, 00:24:39.217 "data_offset": 0, 00:24:39.217 "data_size": 65536 00:24:39.217 } 00:24:39.217 ] 00:24:39.217 }' 00:24:39.217 05:21:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:39.217 05:21:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:39.217 05:21:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:39.217 05:21:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:39.217 05:21:58 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:40.152 05:21:59 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:40.153 05:21:59 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:40.153 05:21:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:40.153 05:21:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:40.153 05:21:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:40.153 05:21:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:40.153 05:21:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.153 05:21:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.411 05:21:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:40.411 "name": "raid_bdev1", 00:24:40.411 "uuid": "4a5a7d6c-b41b-4e53-821b-98c5c80a3c98", 00:24:40.411 "strip_size_kb": 64, 00:24:40.411 "state": "online", 
00:24:40.411 "raid_level": "raid5f", 00:24:40.411 "superblock": false, 00:24:40.411 "num_base_bdevs": 4, 00:24:40.411 "num_base_bdevs_discovered": 4, 00:24:40.412 "num_base_bdevs_operational": 4, 00:24:40.412 "process": { 00:24:40.412 "type": "rebuild", 00:24:40.412 "target": "spare", 00:24:40.412 "progress": { 00:24:40.412 "blocks": 74880, 00:24:40.412 "percent": 38 00:24:40.412 } 00:24:40.412 }, 00:24:40.412 "base_bdevs_list": [ 00:24:40.412 { 00:24:40.412 "name": "spare", 00:24:40.412 "uuid": "487fb768-f265-5f65-a298-a5993c1f4207", 00:24:40.412 "is_configured": true, 00:24:40.412 "data_offset": 0, 00:24:40.412 "data_size": 65536 00:24:40.412 }, 00:24:40.412 { 00:24:40.412 "name": "BaseBdev2", 00:24:40.412 "uuid": "5631cc15-814c-4442-86bf-fa837f938a3b", 00:24:40.412 "is_configured": true, 00:24:40.412 "data_offset": 0, 00:24:40.412 "data_size": 65536 00:24:40.412 }, 00:24:40.412 { 00:24:40.412 "name": "BaseBdev3", 00:24:40.412 "uuid": "2a919fc8-7104-4449-9126-fcff4fbf9532", 00:24:40.412 "is_configured": true, 00:24:40.412 "data_offset": 0, 00:24:40.412 "data_size": 65536 00:24:40.412 }, 00:24:40.412 { 00:24:40.412 "name": "BaseBdev4", 00:24:40.412 "uuid": "afc48645-28fb-4067-8edc-131746eaf81a", 00:24:40.412 "is_configured": true, 00:24:40.412 "data_offset": 0, 00:24:40.412 "data_size": 65536 00:24:40.412 } 00:24:40.412 ] 00:24:40.412 }' 00:24:40.412 05:21:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:40.412 05:21:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:40.412 05:21:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:40.412 05:21:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:40.412 05:21:59 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:41.348 05:22:00 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:41.348 05:22:00 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:41.348 05:22:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:41.348 05:22:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:41.348 05:22:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:41.348 05:22:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:41.348 05:22:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.348 05:22:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.608 05:22:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:41.608 "name": "raid_bdev1", 00:24:41.608 "uuid": "4a5a7d6c-b41b-4e53-821b-98c5c80a3c98", 00:24:41.608 "strip_size_kb": 64, 00:24:41.608 "state": "online", 00:24:41.608 "raid_level": "raid5f", 00:24:41.608 "superblock": false, 00:24:41.608 "num_base_bdevs": 4, 00:24:41.608 "num_base_bdevs_discovered": 4, 00:24:41.608 "num_base_bdevs_operational": 4, 00:24:41.608 "process": { 00:24:41.608 "type": "rebuild", 00:24:41.608 "target": "spare", 00:24:41.608 "progress": { 00:24:41.608 "blocks": 97920, 00:24:41.608 "percent": 49 00:24:41.608 } 00:24:41.608 }, 00:24:41.608 "base_bdevs_list": [ 00:24:41.608 { 00:24:41.608 "name": "spare", 00:24:41.608 "uuid": "487fb768-f265-5f65-a298-a5993c1f4207", 00:24:41.608 "is_configured": true, 00:24:41.608 "data_offset": 0, 00:24:41.608 "data_size": 65536 00:24:41.608 }, 00:24:41.608 { 00:24:41.608 "name": "BaseBdev2", 00:24:41.608 "uuid": "5631cc15-814c-4442-86bf-fa837f938a3b", 00:24:41.608 "is_configured": true, 00:24:41.608 "data_offset": 0, 
00:24:41.608 "data_size": 65536 00:24:41.608 }, 00:24:41.608 { 00:24:41.608 "name": "BaseBdev3", 00:24:41.608 "uuid": "2a919fc8-7104-4449-9126-fcff4fbf9532", 00:24:41.608 "is_configured": true, 00:24:41.608 "data_offset": 0, 00:24:41.608 "data_size": 65536 00:24:41.608 }, 00:24:41.608 { 00:24:41.608 "name": "BaseBdev4", 00:24:41.608 "uuid": "afc48645-28fb-4067-8edc-131746eaf81a", 00:24:41.608 "is_configured": true, 00:24:41.608 "data_offset": 0, 00:24:41.608 "data_size": 65536 00:24:41.608 } 00:24:41.608 ] 00:24:41.608 }' 00:24:41.608 05:22:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:41.608 05:22:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:41.608 05:22:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:41.608 05:22:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:41.608 05:22:00 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:42.986 05:22:01 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:42.986 05:22:01 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:42.986 05:22:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:42.986 05:22:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:42.986 05:22:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:42.986 05:22:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:42.986 05:22:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:42.986 05:22:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:42.986 05:22:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:42.986 "name": "raid_bdev1", 00:24:42.986 "uuid": "4a5a7d6c-b41b-4e53-821b-98c5c80a3c98", 00:24:42.986 "strip_size_kb": 64, 00:24:42.986 "state": "online", 00:24:42.986 "raid_level": "raid5f", 00:24:42.986 "superblock": false, 00:24:42.986 "num_base_bdevs": 4, 00:24:42.986 "num_base_bdevs_discovered": 4, 00:24:42.986 "num_base_bdevs_operational": 4, 00:24:42.986 "process": { 00:24:42.986 "type": "rebuild", 00:24:42.986 "target": "spare", 00:24:42.986 "progress": { 00:24:42.986 "blocks": 122880, 00:24:42.986 "percent": 62 00:24:42.986 } 00:24:42.986 }, 00:24:42.986 "base_bdevs_list": [ 00:24:42.986 { 00:24:42.986 "name": "spare", 00:24:42.986 "uuid": "487fb768-f265-5f65-a298-a5993c1f4207", 00:24:42.986 "is_configured": true, 00:24:42.986 "data_offset": 0, 00:24:42.986 "data_size": 65536 00:24:42.986 }, 00:24:42.986 { 00:24:42.986 "name": "BaseBdev2", 00:24:42.986 "uuid": "5631cc15-814c-4442-86bf-fa837f938a3b", 00:24:42.986 "is_configured": true, 00:24:42.986 "data_offset": 0, 00:24:42.986 "data_size": 65536 00:24:42.986 }, 00:24:42.986 { 00:24:42.986 "name": "BaseBdev3", 00:24:42.986 "uuid": "2a919fc8-7104-4449-9126-fcff4fbf9532", 00:24:42.986 "is_configured": true, 00:24:42.986 "data_offset": 0, 00:24:42.986 "data_size": 65536 00:24:42.986 }, 00:24:42.986 { 00:24:42.986 "name": "BaseBdev4", 00:24:42.986 "uuid": "afc48645-28fb-4067-8edc-131746eaf81a", 00:24:42.986 "is_configured": true, 00:24:42.986 "data_offset": 0, 00:24:42.986 "data_size": 65536 00:24:42.986 } 00:24:42.986 ] 00:24:42.986 }' 00:24:42.987 05:22:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:42.987 05:22:01 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:42.987 05:22:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:42.987 05:22:01 -- bdev/bdev_raid.sh@191 -- # [[ 
spare == \s\p\a\r\e ]] 00:24:42.987 05:22:01 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:43.924 05:22:02 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:43.924 05:22:02 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:43.924 05:22:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:43.924 05:22:02 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:43.924 05:22:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:43.924 05:22:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:43.924 05:22:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:43.924 05:22:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:44.183 05:22:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:44.183 "name": "raid_bdev1", 00:24:44.183 "uuid": "4a5a7d6c-b41b-4e53-821b-98c5c80a3c98", 00:24:44.183 "strip_size_kb": 64, 00:24:44.183 "state": "online", 00:24:44.183 "raid_level": "raid5f", 00:24:44.183 "superblock": false, 00:24:44.183 "num_base_bdevs": 4, 00:24:44.183 "num_base_bdevs_discovered": 4, 00:24:44.183 "num_base_bdevs_operational": 4, 00:24:44.183 "process": { 00:24:44.183 "type": "rebuild", 00:24:44.183 "target": "spare", 00:24:44.183 "progress": { 00:24:44.183 "blocks": 145920, 00:24:44.183 "percent": 74 00:24:44.183 } 00:24:44.183 }, 00:24:44.183 "base_bdevs_list": [ 00:24:44.183 { 00:24:44.183 "name": "spare", 00:24:44.183 "uuid": "487fb768-f265-5f65-a298-a5993c1f4207", 00:24:44.183 "is_configured": true, 00:24:44.183 "data_offset": 0, 00:24:44.183 "data_size": 65536 00:24:44.183 }, 00:24:44.183 { 00:24:44.183 "name": "BaseBdev2", 00:24:44.183 "uuid": "5631cc15-814c-4442-86bf-fa837f938a3b", 00:24:44.183 "is_configured": true, 00:24:44.183 "data_offset": 0, 00:24:44.183 "data_size": 65536 00:24:44.183 }, 00:24:44.183 { 00:24:44.183 "name": "BaseBdev3", 00:24:44.183 "uuid": "2a919fc8-7104-4449-9126-fcff4fbf9532", 00:24:44.183 "is_configured": true, 00:24:44.183 "data_offset": 0, 00:24:44.183 "data_size": 65536 00:24:44.183 }, 00:24:44.183 { 00:24:44.183 "name": "BaseBdev4", 00:24:44.183 "uuid": "afc48645-28fb-4067-8edc-131746eaf81a", 00:24:44.183 "is_configured": true, 00:24:44.183 "data_offset": 0, 00:24:44.183 "data_size": 65536 00:24:44.183 } 00:24:44.183 ] 00:24:44.183 }' 00:24:44.183 05:22:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:44.183 05:22:03 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:44.183 05:22:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:44.183 05:22:03 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:44.183 05:22:03 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:45.119 05:22:04 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:45.119 05:22:04 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:45.119 05:22:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:45.119 05:22:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:45.119 05:22:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:45.119 05:22:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:45.119 05:22:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.119 05:22:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.378 05:22:04 -- 
bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:45.378 "name": "raid_bdev1", 00:24:45.378 "uuid": "4a5a7d6c-b41b-4e53-821b-98c5c80a3c98", 00:24:45.378 "strip_size_kb": 64, 00:24:45.378 "state": "online", 00:24:45.378 "raid_level": "raid5f", 00:24:45.378 "superblock": false, 00:24:45.378 "num_base_bdevs": 4, 00:24:45.378 "num_base_bdevs_discovered": 4, 00:24:45.378 "num_base_bdevs_operational": 4, 00:24:45.378 "process": { 00:24:45.378 "type": "rebuild", 00:24:45.378 "target": "spare", 00:24:45.378 "progress": { 00:24:45.378 "blocks": 170880, 00:24:45.378 "percent": 86 00:24:45.378 } 00:24:45.378 }, 00:24:45.378 "base_bdevs_list": [ 00:24:45.378 { 00:24:45.378 "name": "spare", 00:24:45.378 "uuid": "487fb768-f265-5f65-a298-a5993c1f4207", 00:24:45.378 "is_configured": true, 00:24:45.378 "data_offset": 0, 00:24:45.378 "data_size": 65536 00:24:45.378 }, 00:24:45.378 { 00:24:45.378 "name": "BaseBdev2", 00:24:45.378 "uuid": "5631cc15-814c-4442-86bf-fa837f938a3b", 00:24:45.378 "is_configured": true, 00:24:45.378 "data_offset": 0, 00:24:45.378 "data_size": 65536 00:24:45.378 }, 00:24:45.378 { 00:24:45.378 "name": "BaseBdev3", 00:24:45.378 "uuid": "2a919fc8-7104-4449-9126-fcff4fbf9532", 00:24:45.378 "is_configured": true, 00:24:45.378 "data_offset": 0, 00:24:45.378 "data_size": 65536 00:24:45.378 }, 00:24:45.378 { 00:24:45.378 "name": "BaseBdev4", 00:24:45.378 "uuid": "afc48645-28fb-4067-8edc-131746eaf81a", 00:24:45.378 "is_configured": true, 00:24:45.378 "data_offset": 0, 00:24:45.378 "data_size": 65536 00:24:45.378 } 00:24:45.378 ] 00:24:45.378 }' 00:24:45.378 05:22:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:45.378 05:22:04 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:45.378 05:22:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:45.378 05:22:04 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:45.378 05:22:04 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:46.787 05:22:05 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:46.787 05:22:05 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:46.787 05:22:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:46.787 05:22:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:46.787 05:22:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:46.787 05:22:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:46.787 05:22:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:46.787 05:22:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:46.787 05:22:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:46.787 "name": "raid_bdev1", 00:24:46.787 "uuid": "4a5a7d6c-b41b-4e53-821b-98c5c80a3c98", 00:24:46.787 "strip_size_kb": 64, 00:24:46.787 "state": "online", 00:24:46.787 "raid_level": "raid5f", 00:24:46.787 "superblock": false, 00:24:46.787 "num_base_bdevs": 4, 00:24:46.787 "num_base_bdevs_discovered": 4, 00:24:46.787 "num_base_bdevs_operational": 4, 00:24:46.787 "process": { 00:24:46.787 "type": "rebuild", 00:24:46.787 "target": "spare", 00:24:46.787 "progress": { 00:24:46.787 "blocks": 193920, 00:24:46.787 "percent": 98 00:24:46.787 } 00:24:46.787 }, 00:24:46.787 "base_bdevs_list": [ 00:24:46.787 { 00:24:46.787 "name": "spare", 00:24:46.787 "uuid": "487fb768-f265-5f65-a298-a5993c1f4207", 00:24:46.787 "is_configured": true, 00:24:46.787 "data_offset": 0, 00:24:46.787 
"data_size": 65536 00:24:46.787 }, 00:24:46.787 { 00:24:46.787 "name": "BaseBdev2", 00:24:46.787 "uuid": "5631cc15-814c-4442-86bf-fa837f938a3b", 00:24:46.787 "is_configured": true, 00:24:46.787 "data_offset": 0, 00:24:46.787 "data_size": 65536 00:24:46.787 }, 00:24:46.787 { 00:24:46.787 "name": "BaseBdev3", 00:24:46.787 "uuid": "2a919fc8-7104-4449-9126-fcff4fbf9532", 00:24:46.787 "is_configured": true, 00:24:46.787 "data_offset": 0, 00:24:46.787 "data_size": 65536 00:24:46.787 }, 00:24:46.787 { 00:24:46.787 "name": "BaseBdev4", 00:24:46.787 "uuid": "afc48645-28fb-4067-8edc-131746eaf81a", 00:24:46.787 "is_configured": true, 00:24:46.787 "data_offset": 0, 00:24:46.787 "data_size": 65536 00:24:46.787 } 00:24:46.787 ] 00:24:46.787 }' 00:24:46.787 05:22:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:46.787 05:22:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:46.787 05:22:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:46.787 05:22:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:46.787 05:22:05 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:46.787 [2024-07-26 05:22:05.817046] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:46.787 [2024-07-26 05:22:05.817113] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:46.787 [2024-07-26 05:22:05.817163] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:47.722 05:22:06 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:47.722 05:22:06 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:47.722 05:22:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:47.722 05:22:06 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:47.722 05:22:06 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:47.722 05:22:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:47.722 05:22:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:47.722 05:22:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:47.981 05:22:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:47.981 "name": "raid_bdev1", 00:24:47.981 "uuid": "4a5a7d6c-b41b-4e53-821b-98c5c80a3c98", 00:24:47.981 "strip_size_kb": 64, 00:24:47.981 "state": "online", 00:24:47.981 "raid_level": "raid5f", 00:24:47.981 "superblock": false, 00:24:47.981 "num_base_bdevs": 4, 00:24:47.981 "num_base_bdevs_discovered": 4, 00:24:47.981 "num_base_bdevs_operational": 4, 00:24:47.981 "base_bdevs_list": [ 00:24:47.981 { 00:24:47.981 "name": "spare", 00:24:47.981 "uuid": "487fb768-f265-5f65-a298-a5993c1f4207", 00:24:47.981 "is_configured": true, 00:24:47.981 "data_offset": 0, 00:24:47.981 "data_size": 65536 00:24:47.981 }, 00:24:47.981 { 00:24:47.981 "name": "BaseBdev2", 00:24:47.981 "uuid": "5631cc15-814c-4442-86bf-fa837f938a3b", 00:24:47.981 "is_configured": true, 00:24:47.981 "data_offset": 0, 00:24:47.981 "data_size": 65536 00:24:47.981 }, 00:24:47.981 { 00:24:47.981 "name": "BaseBdev3", 00:24:47.981 "uuid": "2a919fc8-7104-4449-9126-fcff4fbf9532", 00:24:47.981 "is_configured": true, 00:24:47.981 "data_offset": 0, 00:24:47.981 "data_size": 65536 00:24:47.981 }, 00:24:47.981 { 00:24:47.981 "name": "BaseBdev4", 00:24:47.981 "uuid": "afc48645-28fb-4067-8edc-131746eaf81a", 00:24:47.981 "is_configured": true, 00:24:47.981 "data_offset": 0, 
00:24:47.981 "data_size": 65536 00:24:47.981 } 00:24:47.981 ] 00:24:47.981 }' 00:24:47.981 05:22:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:47.981 05:22:06 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:47.981 05:22:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:47.981 05:22:06 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:24:47.981 05:22:06 -- bdev/bdev_raid.sh@660 -- # break 00:24:47.981 05:22:06 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:47.981 05:22:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:47.981 05:22:06 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:47.981 05:22:06 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:47.981 05:22:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:47.981 05:22:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:47.981 05:22:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:48.241 05:22:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:48.241 "name": "raid_bdev1", 00:24:48.241 "uuid": "4a5a7d6c-b41b-4e53-821b-98c5c80a3c98", 00:24:48.241 "strip_size_kb": 64, 00:24:48.241 "state": "online", 00:24:48.241 "raid_level": "raid5f", 00:24:48.241 "superblock": false, 00:24:48.241 "num_base_bdevs": 4, 00:24:48.241 "num_base_bdevs_discovered": 4, 00:24:48.241 "num_base_bdevs_operational": 4, 00:24:48.241 "base_bdevs_list": [ 00:24:48.241 { 00:24:48.241 "name": "spare", 00:24:48.241 "uuid": "487fb768-f265-5f65-a298-a5993c1f4207", 00:24:48.241 "is_configured": true, 00:24:48.241 "data_offset": 0, 00:24:48.241 "data_size": 65536 00:24:48.241 }, 00:24:48.241 { 00:24:48.241 "name": "BaseBdev2", 00:24:48.241 "uuid": "5631cc15-814c-4442-86bf-fa837f938a3b", 00:24:48.241 "is_configured": true, 00:24:48.241 "data_offset": 0, 00:24:48.241 "data_size": 65536 00:24:48.241 }, 00:24:48.241 { 00:24:48.241 "name": "BaseBdev3", 00:24:48.241 "uuid": "2a919fc8-7104-4449-9126-fcff4fbf9532", 00:24:48.241 "is_configured": true, 00:24:48.241 "data_offset": 0, 00:24:48.241 "data_size": 65536 00:24:48.241 }, 00:24:48.241 { 00:24:48.241 "name": "BaseBdev4", 00:24:48.241 "uuid": "afc48645-28fb-4067-8edc-131746eaf81a", 00:24:48.241 "is_configured": true, 00:24:48.241 "data_offset": 0, 00:24:48.241 "data_size": 65536 00:24:48.241 } 00:24:48.241 ] 00:24:48.241 }' 00:24:48.241 05:22:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:48.241 05:22:07 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:48.241 05:22:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:48.241 05:22:07 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:48.241 05:22:07 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:48.241 05:22:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:48.241 05:22:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:48.241 05:22:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:48.241 05:22:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:48.241 05:22:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:48.241 05:22:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:48.241 05:22:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:48.241 05:22:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
00:24:48.241 05:22:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:48.241 05:22:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.241 05:22:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:48.500 05:22:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:48.500 "name": "raid_bdev1", 00:24:48.500 "uuid": "4a5a7d6c-b41b-4e53-821b-98c5c80a3c98", 00:24:48.500 "strip_size_kb": 64, 00:24:48.500 "state": "online", 00:24:48.500 "raid_level": "raid5f", 00:24:48.500 "superblock": false, 00:24:48.500 "num_base_bdevs": 4, 00:24:48.500 "num_base_bdevs_discovered": 4, 00:24:48.500 "num_base_bdevs_operational": 4, 00:24:48.500 "base_bdevs_list": [ 00:24:48.500 { 00:24:48.500 "name": "spare", 00:24:48.500 "uuid": "487fb768-f265-5f65-a298-a5993c1f4207", 00:24:48.500 "is_configured": true, 00:24:48.500 "data_offset": 0, 00:24:48.500 "data_size": 65536 00:24:48.500 }, 00:24:48.500 { 00:24:48.500 "name": "BaseBdev2", 00:24:48.500 "uuid": "5631cc15-814c-4442-86bf-fa837f938a3b", 00:24:48.500 "is_configured": true, 00:24:48.500 "data_offset": 0, 00:24:48.500 "data_size": 65536 00:24:48.500 }, 00:24:48.500 { 00:24:48.500 "name": "BaseBdev3", 00:24:48.500 "uuid": "2a919fc8-7104-4449-9126-fcff4fbf9532", 00:24:48.500 "is_configured": true, 00:24:48.500 "data_offset": 0, 00:24:48.500 "data_size": 65536 00:24:48.500 }, 00:24:48.500 { 00:24:48.500 "name": "BaseBdev4", 00:24:48.500 "uuid": "afc48645-28fb-4067-8edc-131746eaf81a", 00:24:48.500 "is_configured": true, 00:24:48.500 "data_offset": 0, 00:24:48.500 "data_size": 65536 00:24:48.500 } 00:24:48.500 ] 00:24:48.500 }' 00:24:48.500 05:22:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:48.500 05:22:07 -- common/autotest_common.sh@10 -- # set +x 00:24:48.759 05:22:07 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:49.018 [2024-07-26 05:22:07.920862] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:49.018 [2024-07-26 05:22:07.920898] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:49.018 [2024-07-26 05:22:07.920977] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:49.018 [2024-07-26 05:22:07.921092] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:49.018 [2024-07-26 05:22:07.921109] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:24:49.018 05:22:07 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:49.018 05:22:07 -- bdev/bdev_raid.sh@671 -- # jq length 00:24:49.277 05:22:08 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:24:49.277 05:22:08 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:24:49.277 05:22:08 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:49.277 05:22:08 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:49.277 05:22:08 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:49.277 05:22:08 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:49.277 05:22:08 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:49.277 05:22:08 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:49.277 05:22:08 -- 
bdev/nbd_common.sh@12 -- # local i 00:24:49.277 05:22:08 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:49.277 05:22:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:49.277 05:22:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:49.536 /dev/nbd0 00:24:49.536 05:22:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:49.536 05:22:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:49.536 05:22:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:49.536 05:22:08 -- common/autotest_common.sh@857 -- # local i 00:24:49.536 05:22:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:49.536 05:22:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:49.536 05:22:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:49.536 05:22:08 -- common/autotest_common.sh@861 -- # break 00:24:49.536 05:22:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:49.536 05:22:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:49.536 05:22:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:49.536 1+0 records in 00:24:49.536 1+0 records out 00:24:49.536 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285555 s, 14.3 MB/s 00:24:49.536 05:22:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:49.536 05:22:08 -- common/autotest_common.sh@874 -- # size=4096 00:24:49.536 05:22:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:49.536 05:22:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:49.536 05:22:08 -- common/autotest_common.sh@877 -- # return 0 00:24:49.536 05:22:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:49.536 05:22:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:49.536 05:22:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:49.795 /dev/nbd1 00:24:49.795 05:22:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:49.795 05:22:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:49.795 05:22:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:24:49.795 05:22:08 -- common/autotest_common.sh@857 -- # local i 00:24:49.795 05:22:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:49.795 05:22:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:49.795 05:22:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:24:49.795 05:22:08 -- common/autotest_common.sh@861 -- # break 00:24:49.795 05:22:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:49.795 05:22:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:49.795 05:22:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:49.795 1+0 records in 00:24:49.795 1+0 records out 00:24:49.795 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357889 s, 11.4 MB/s 00:24:49.795 05:22:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:49.795 05:22:08 -- common/autotest_common.sh@874 -- # size=4096 00:24:49.795 05:22:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:49.795 05:22:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:49.795 05:22:08 -- 
common/autotest_common.sh@877 -- # return 0 00:24:49.795 05:22:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:49.796 05:22:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:49.796 05:22:08 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:49.796 05:22:08 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:49.796 05:22:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:49.796 05:22:08 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:49.796 05:22:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:49.796 05:22:08 -- bdev/nbd_common.sh@51 -- # local i 00:24:49.796 05:22:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:49.796 05:22:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:50.055 05:22:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:50.055 05:22:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:50.055 05:22:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:50.055 05:22:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:50.055 05:22:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:50.055 05:22:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:50.055 05:22:09 -- bdev/nbd_common.sh@41 -- # break 00:24:50.055 05:22:09 -- bdev/nbd_common.sh@45 -- # return 0 00:24:50.055 05:22:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:50.055 05:22:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:50.314 05:22:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:50.314 05:22:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:50.314 05:22:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:50.314 05:22:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:50.314 05:22:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:50.314 05:22:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:50.314 05:22:09 -- bdev/nbd_common.sh@41 -- # break 00:24:50.314 05:22:09 -- bdev/nbd_common.sh@45 -- # return 0 00:24:50.314 05:22:09 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:24:50.314 05:22:09 -- bdev/bdev_raid.sh@709 -- # killprocess 85887 00:24:50.314 05:22:09 -- common/autotest_common.sh@926 -- # '[' -z 85887 ']' 00:24:50.314 05:22:09 -- common/autotest_common.sh@930 -- # kill -0 85887 00:24:50.314 05:22:09 -- common/autotest_common.sh@931 -- # uname 00:24:50.314 05:22:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:50.314 05:22:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85887 00:24:50.314 killing process with pid 85887 00:24:50.314 Received shutdown signal, test time was about 60.000000 seconds 00:24:50.314 00:24:50.314 Latency(us) 00:24:50.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.314 =================================================================================================================== 00:24:50.314 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:50.314 05:22:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:50.314 05:22:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:50.314 05:22:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85887' 00:24:50.314 05:22:09 -- common/autotest_common.sh@945 -- # kill 85887 00:24:50.314 
05:22:09 -- common/autotest_common.sh@950 -- # wait 85887 00:24:50.314 [2024-07-26 05:22:09.306461] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:50.573 [2024-07-26 05:22:09.620437] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@711 -- # return 0 00:24:51.511 00:24:51.511 real 0m23.346s 00:24:51.511 user 0m31.396s 00:24:51.511 sys 0m2.799s 00:24:51.511 ************************************ 00:24:51.511 END TEST raid5f_rebuild_test 00:24:51.511 ************************************ 00:24:51.511 05:22:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:51.511 05:22:10 -- common/autotest_common.sh@10 -- # set +x 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false 00:24:51.511 05:22:10 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:24:51.511 05:22:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:51.511 05:22:10 -- common/autotest_common.sh@10 -- # set +x 00:24:51.511 ************************************ 00:24:51.511 START TEST raid5f_rebuild_test_sb 00:24:51.511 ************************************ 00:24:51.511 05:22:10 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 4 true false 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 
00:24:51.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@544 -- # raid_pid=86458 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@545 -- # waitforlisten 86458 /var/tmp/spdk-raid.sock 00:24:51.511 05:22:10 -- common/autotest_common.sh@819 -- # '[' -z 86458 ']' 00:24:51.511 05:22:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:51.511 05:22:10 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:51.511 05:22:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:51.511 05:22:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:51.511 05:22:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:51.511 05:22:10 -- common/autotest_common.sh@10 -- # set +x 00:24:51.770 [2024-07-26 05:22:10.657872] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:51.771 [2024-07-26 05:22:10.658256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86458 ] 00:24:51.771 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:51.771 Zero copy mechanism will not be used. 00:24:51.771 [2024-07-26 05:22:10.824780] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.030 [2024-07-26 05:22:10.986487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.030 [2024-07-26 05:22:11.129184] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:52.597 05:22:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:52.597 05:22:11 -- common/autotest_common.sh@852 -- # return 0 00:24:52.597 05:22:11 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:52.597 05:22:11 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:52.597 05:22:11 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:52.856 BaseBdev1_malloc 00:24:52.856 05:22:11 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:53.116 [2024-07-26 05:22:12.084108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:53.116 [2024-07-26 05:22:12.084202] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:53.116 [2024-07-26 05:22:12.084236] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:24:53.116 [2024-07-26 05:22:12.084253] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:53.116 [2024-07-26 05:22:12.087012] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:53.116 [2024-07-26 05:22:12.087089] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:53.116 BaseBdev1 00:24:53.116 05:22:12 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:53.116 05:22:12 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:53.116 05:22:12 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:53.375 BaseBdev2_malloc 00:24:53.375 05:22:12 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:53.634 [2024-07-26 05:22:12.500054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:53.634 [2024-07-26 05:22:12.500115] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:53.634 [2024-07-26 05:22:12.500150] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:24:53.634 [2024-07-26 05:22:12.500168] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:53.634 [2024-07-26 05:22:12.502195] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:53.634 [2024-07-26 05:22:12.502238] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:53.634 BaseBdev2 00:24:53.634 05:22:12 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:53.634 05:22:12 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:53.634 05:22:12 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:53.634 BaseBdev3_malloc 00:24:53.634 05:22:12 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:53.893 [2024-07-26 05:22:12.889114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:53.893 [2024-07-26 05:22:12.889369] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:53.893 [2024-07-26 05:22:12.889410] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:24:53.893 [2024-07-26 05:22:12.889447] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:53.893 [2024-07-26 05:22:12.891689] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:53.893 [2024-07-26 05:22:12.891897] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:53.893 BaseBdev3 00:24:53.893 05:22:12 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:53.893 05:22:12 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:53.893 05:22:12 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:54.152 BaseBdev4_malloc 00:24:54.152 05:22:13 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:54.419 [2024-07-26 05:22:13.277610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:54.419 [2024-07-26 05:22:13.277689] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:54.419 [2024-07-26 05:22:13.277721] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:24:54.419 [2024-07-26 05:22:13.277736] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:54.419 [2024-07-26 05:22:13.280202] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:54.419 BaseBdev4 00:24:54.419 
[2024-07-26 05:22:13.280409] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:54.419 05:22:13 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:54.419 spare_malloc 00:24:54.419 05:22:13 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:54.682 spare_delay 00:24:54.682 05:22:13 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:54.940 [2024-07-26 05:22:13.862966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:54.940 [2024-07-26 05:22:13.863305] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:54.941 [2024-07-26 05:22:13.863381] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:24:54.941 [2024-07-26 05:22:13.863644] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:54.941 [2024-07-26 05:22:13.865883] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:54.941 [2024-07-26 05:22:13.865927] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:54.941 spare 00:24:54.941 05:22:13 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:24:54.941 [2024-07-26 05:22:14.035053] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:54.941 [2024-07-26 05:22:14.036810] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:54.941 [2024-07-26 05:22:14.036880] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:54.941 [2024-07-26 05:22:14.036945] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:54.941 [2024-07-26 05:22:14.037169] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:24:54.941 [2024-07-26 05:22:14.037190] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:54.941 [2024-07-26 05:22:14.037280] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:24:54.941 [2024-07-26 05:22:14.042694] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:24:54.941 [2024-07-26 05:22:14.042717] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:24:54.941 [2024-07-26 05:22:14.042913] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:55.199 05:22:14 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:55.199 05:22:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:55.199 05:22:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:55.199 05:22:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:55.199 05:22:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:55.199 05:22:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:55.199 05:22:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:55.199 05:22:14 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:55.199 05:22:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:55.199 05:22:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:55.199 05:22:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.199 05:22:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:55.199 05:22:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:55.199 "name": "raid_bdev1", 00:24:55.199 "uuid": "08a01369-f410-47be-a013-fc7dd0947138", 00:24:55.199 "strip_size_kb": 64, 00:24:55.199 "state": "online", 00:24:55.199 "raid_level": "raid5f", 00:24:55.199 "superblock": true, 00:24:55.199 "num_base_bdevs": 4, 00:24:55.199 "num_base_bdevs_discovered": 4, 00:24:55.199 "num_base_bdevs_operational": 4, 00:24:55.199 "base_bdevs_list": [ 00:24:55.199 { 00:24:55.199 "name": "BaseBdev1", 00:24:55.199 "uuid": "0324e68a-b3d4-56a8-9e70-e99f7449da42", 00:24:55.199 "is_configured": true, 00:24:55.199 "data_offset": 2048, 00:24:55.199 "data_size": 63488 00:24:55.199 }, 00:24:55.199 { 00:24:55.199 "name": "BaseBdev2", 00:24:55.199 "uuid": "81de5d9c-523f-5869-86f0-89771c4c6e70", 00:24:55.199 "is_configured": true, 00:24:55.199 "data_offset": 2048, 00:24:55.199 "data_size": 63488 00:24:55.199 }, 00:24:55.199 { 00:24:55.199 "name": "BaseBdev3", 00:24:55.199 "uuid": "07eac370-8c4d-5bbf-aa9c-52eb7ba10ab6", 00:24:55.199 "is_configured": true, 00:24:55.199 "data_offset": 2048, 00:24:55.199 "data_size": 63488 00:24:55.199 }, 00:24:55.199 { 00:24:55.199 "name": "BaseBdev4", 00:24:55.200 "uuid": "d08c79a6-ad7b-5691-9ce6-62d0ee0c968e", 00:24:55.200 "is_configured": true, 00:24:55.200 "data_offset": 2048, 00:24:55.200 "data_size": 63488 00:24:55.200 } 00:24:55.200 ] 00:24:55.200 }' 00:24:55.200 05:22:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:55.200 05:22:14 -- common/autotest_common.sh@10 -- # set +x 00:24:55.458 05:22:14 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:55.458 05:22:14 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:24:55.717 [2024-07-26 05:22:14.665047] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:55.717 05:22:14 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=190464 00:24:55.717 05:22:14 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.717 05:22:14 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:55.976 05:22:14 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:24:55.976 05:22:14 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:24:55.976 05:22:14 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:24:55.976 05:22:14 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:55.976 05:22:14 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:55.976 05:22:14 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:55.976 05:22:14 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:55.976 05:22:14 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:55.976 05:22:14 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:55.976 05:22:14 -- bdev/nbd_common.sh@12 -- # local i 00:24:55.976 05:22:14 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:55.976 05:22:14 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:55.976 05:22:14 
-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:55.976 [2024-07-26 05:22:15.029036] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:24:55.976 /dev/nbd0 00:24:55.976 05:22:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:55.976 05:22:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:55.976 05:22:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:55.976 05:22:15 -- common/autotest_common.sh@857 -- # local i 00:24:55.976 05:22:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:55.976 05:22:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:55.976 05:22:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:55.976 05:22:15 -- common/autotest_common.sh@861 -- # break 00:24:55.977 05:22:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:55.977 05:22:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:55.977 05:22:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:55.977 1+0 records in 00:24:55.977 1+0 records out 00:24:55.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191205 s, 21.4 MB/s 00:24:55.977 05:22:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:55.977 05:22:15 -- common/autotest_common.sh@874 -- # size=4096 00:24:55.977 05:22:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:55.977 05:22:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:55.977 05:22:15 -- common/autotest_common.sh@877 -- # return 0 00:24:55.977 05:22:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:55.977 05:22:15 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:55.977 05:22:15 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:24:55.977 05:22:15 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:24:55.977 05:22:15 -- bdev/bdev_raid.sh@582 -- # echo 192 00:24:55.977 05:22:15 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:24:56.545 496+0 records in 00:24:56.545 496+0 records out 00:24:56.545 97517568 bytes (98 MB, 93 MiB) copied, 0.461084 s, 211 MB/s 00:24:56.545 05:22:15 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:56.545 05:22:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:56.545 05:22:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:56.545 05:22:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:56.545 05:22:15 -- bdev/nbd_common.sh@51 -- # local i 00:24:56.545 05:22:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:56.545 05:22:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:56.804 [2024-07-26 05:22:15.784288] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:56.804 05:22:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:56.804 05:22:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:56.804 05:22:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:56.804 05:22:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:56.804 05:22:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:56.804 05:22:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:56.804 05:22:15 -- 
bdev/nbd_common.sh@41 -- # break 00:24:56.804 05:22:15 -- bdev/nbd_common.sh@45 -- # return 0 00:24:56.804 05:22:15 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:57.063 [2024-07-26 05:22:16.043692] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:57.063 05:22:16 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:57.064 05:22:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:57.064 05:22:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:57.064 05:22:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:57.064 05:22:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:57.064 05:22:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:57.064 05:22:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:57.064 05:22:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:57.064 05:22:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:57.064 05:22:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:57.064 05:22:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.064 05:22:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:57.323 05:22:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:57.323 "name": "raid_bdev1", 00:24:57.323 "uuid": "08a01369-f410-47be-a013-fc7dd0947138", 00:24:57.323 "strip_size_kb": 64, 00:24:57.323 "state": "online", 00:24:57.323 "raid_level": "raid5f", 00:24:57.323 "superblock": true, 00:24:57.323 "num_base_bdevs": 4, 00:24:57.323 "num_base_bdevs_discovered": 3, 00:24:57.323 "num_base_bdevs_operational": 3, 00:24:57.323 "base_bdevs_list": [ 00:24:57.323 { 00:24:57.323 "name": null, 00:24:57.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.323 "is_configured": false, 00:24:57.323 "data_offset": 2048, 00:24:57.323 "data_size": 63488 00:24:57.323 }, 00:24:57.323 { 00:24:57.323 "name": "BaseBdev2", 00:24:57.323 "uuid": "81de5d9c-523f-5869-86f0-89771c4c6e70", 00:24:57.323 "is_configured": true, 00:24:57.323 "data_offset": 2048, 00:24:57.323 "data_size": 63488 00:24:57.323 }, 00:24:57.323 { 00:24:57.323 "name": "BaseBdev3", 00:24:57.323 "uuid": "07eac370-8c4d-5bbf-aa9c-52eb7ba10ab6", 00:24:57.323 "is_configured": true, 00:24:57.323 "data_offset": 2048, 00:24:57.323 "data_size": 63488 00:24:57.323 }, 00:24:57.323 { 00:24:57.323 "name": "BaseBdev4", 00:24:57.323 "uuid": "d08c79a6-ad7b-5691-9ce6-62d0ee0c968e", 00:24:57.323 "is_configured": true, 00:24:57.323 "data_offset": 2048, 00:24:57.323 "data_size": 63488 00:24:57.323 } 00:24:57.323 ] 00:24:57.323 }' 00:24:57.323 05:22:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:57.323 05:22:16 -- common/autotest_common.sh@10 -- # set +x 00:24:57.582 05:22:16 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:57.582 [2024-07-26 05:22:16.667795] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:57.582 [2024-07-26 05:22:16.667849] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:57.582 [2024-07-26 05:22:16.678063] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002a300 00:24:57.582 [2024-07-26 05:22:16.685271] bdev_raid.c:2603:raid_bdev_process_thread_init: 
*NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:57.841 05:22:16 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:24:58.778 05:22:17 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:58.778 05:22:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:58.778 05:22:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:58.778 05:22:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:58.778 05:22:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:58.778 05:22:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:58.778 05:22:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.037 05:22:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:59.037 "name": "raid_bdev1", 00:24:59.037 "uuid": "08a01369-f410-47be-a013-fc7dd0947138", 00:24:59.037 "strip_size_kb": 64, 00:24:59.037 "state": "online", 00:24:59.037 "raid_level": "raid5f", 00:24:59.037 "superblock": true, 00:24:59.037 "num_base_bdevs": 4, 00:24:59.037 "num_base_bdevs_discovered": 4, 00:24:59.037 "num_base_bdevs_operational": 4, 00:24:59.037 "process": { 00:24:59.037 "type": "rebuild", 00:24:59.037 "target": "spare", 00:24:59.037 "progress": { 00:24:59.037 "blocks": 23040, 00:24:59.037 "percent": 12 00:24:59.037 } 00:24:59.037 }, 00:24:59.037 "base_bdevs_list": [ 00:24:59.037 { 00:24:59.037 "name": "spare", 00:24:59.037 "uuid": "a004164d-2bb2-5ad9-a12b-0c7df0b5661a", 00:24:59.037 "is_configured": true, 00:24:59.037 "data_offset": 2048, 00:24:59.037 "data_size": 63488 00:24:59.037 }, 00:24:59.037 { 00:24:59.037 "name": "BaseBdev2", 00:24:59.037 "uuid": "81de5d9c-523f-5869-86f0-89771c4c6e70", 00:24:59.037 "is_configured": true, 00:24:59.037 "data_offset": 2048, 00:24:59.037 "data_size": 63488 00:24:59.037 }, 00:24:59.037 { 00:24:59.037 "name": "BaseBdev3", 00:24:59.037 "uuid": "07eac370-8c4d-5bbf-aa9c-52eb7ba10ab6", 00:24:59.037 "is_configured": true, 00:24:59.037 "data_offset": 2048, 00:24:59.037 "data_size": 63488 00:24:59.037 }, 00:24:59.037 { 00:24:59.037 "name": "BaseBdev4", 00:24:59.037 "uuid": "d08c79a6-ad7b-5691-9ce6-62d0ee0c968e", 00:24:59.037 "is_configured": true, 00:24:59.037 "data_offset": 2048, 00:24:59.037 "data_size": 63488 00:24:59.037 } 00:24:59.037 ] 00:24:59.037 }' 00:24:59.037 05:22:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:59.037 05:22:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:59.037 05:22:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:59.037 05:22:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:59.037 05:22:17 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:59.297 [2024-07-26 05:22:18.174592] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:59.297 [2024-07-26 05:22:18.195405] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:59.297 [2024-07-26 05:22:18.195483] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:59.297 05:22:18 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:59.297 05:22:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:59.297 05:22:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:59.297 05:22:18 -- bdev/bdev_raid.sh@119 -- # local 
raid_level=raid5f 00:24:59.297 05:22:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:59.297 05:22:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:59.297 05:22:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:59.297 05:22:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:59.297 05:22:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:59.297 05:22:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:59.297 05:22:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.297 05:22:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.556 05:22:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:59.556 "name": "raid_bdev1", 00:24:59.556 "uuid": "08a01369-f410-47be-a013-fc7dd0947138", 00:24:59.556 "strip_size_kb": 64, 00:24:59.556 "state": "online", 00:24:59.556 "raid_level": "raid5f", 00:24:59.556 "superblock": true, 00:24:59.556 "num_base_bdevs": 4, 00:24:59.556 "num_base_bdevs_discovered": 3, 00:24:59.556 "num_base_bdevs_operational": 3, 00:24:59.556 "base_bdevs_list": [ 00:24:59.556 { 00:24:59.556 "name": null, 00:24:59.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:59.556 "is_configured": false, 00:24:59.556 "data_offset": 2048, 00:24:59.556 "data_size": 63488 00:24:59.556 }, 00:24:59.556 { 00:24:59.556 "name": "BaseBdev2", 00:24:59.556 "uuid": "81de5d9c-523f-5869-86f0-89771c4c6e70", 00:24:59.556 "is_configured": true, 00:24:59.556 "data_offset": 2048, 00:24:59.556 "data_size": 63488 00:24:59.556 }, 00:24:59.556 { 00:24:59.556 "name": "BaseBdev3", 00:24:59.556 "uuid": "07eac370-8c4d-5bbf-aa9c-52eb7ba10ab6", 00:24:59.556 "is_configured": true, 00:24:59.556 "data_offset": 2048, 00:24:59.556 "data_size": 63488 00:24:59.556 }, 00:24:59.556 { 00:24:59.556 "name": "BaseBdev4", 00:24:59.556 "uuid": "d08c79a6-ad7b-5691-9ce6-62d0ee0c968e", 00:24:59.556 "is_configured": true, 00:24:59.556 "data_offset": 2048, 00:24:59.556 "data_size": 63488 00:24:59.556 } 00:24:59.556 ] 00:24:59.556 }' 00:24:59.556 05:22:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:59.556 05:22:18 -- common/autotest_common.sh@10 -- # set +x 00:24:59.815 05:22:18 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:59.815 05:22:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:59.815 05:22:18 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:59.815 05:22:18 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:59.815 05:22:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:59.815 05:22:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.815 05:22:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:00.077 05:22:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:00.077 "name": "raid_bdev1", 00:25:00.077 "uuid": "08a01369-f410-47be-a013-fc7dd0947138", 00:25:00.077 "strip_size_kb": 64, 00:25:00.077 "state": "online", 00:25:00.077 "raid_level": "raid5f", 00:25:00.077 "superblock": true, 00:25:00.077 "num_base_bdevs": 4, 00:25:00.077 "num_base_bdevs_discovered": 3, 00:25:00.077 "num_base_bdevs_operational": 3, 00:25:00.077 "base_bdevs_list": [ 00:25:00.077 { 00:25:00.077 "name": null, 00:25:00.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:00.077 "is_configured": false, 00:25:00.077 "data_offset": 2048, 00:25:00.077 "data_size": 63488 
00:25:00.077 }, 00:25:00.077 { 00:25:00.077 "name": "BaseBdev2", 00:25:00.077 "uuid": "81de5d9c-523f-5869-86f0-89771c4c6e70", 00:25:00.077 "is_configured": true, 00:25:00.077 "data_offset": 2048, 00:25:00.077 "data_size": 63488 00:25:00.077 }, 00:25:00.077 { 00:25:00.077 "name": "BaseBdev3", 00:25:00.077 "uuid": "07eac370-8c4d-5bbf-aa9c-52eb7ba10ab6", 00:25:00.077 "is_configured": true, 00:25:00.077 "data_offset": 2048, 00:25:00.077 "data_size": 63488 00:25:00.077 }, 00:25:00.077 { 00:25:00.077 "name": "BaseBdev4", 00:25:00.077 "uuid": "d08c79a6-ad7b-5691-9ce6-62d0ee0c968e", 00:25:00.077 "is_configured": true, 00:25:00.077 "data_offset": 2048, 00:25:00.077 "data_size": 63488 00:25:00.077 } 00:25:00.077 ] 00:25:00.077 }' 00:25:00.077 05:22:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:00.077 05:22:19 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:00.077 05:22:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:00.077 05:22:19 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:00.077 05:22:19 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:00.077 [2024-07-26 05:22:19.179620] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:00.077 [2024-07-26 05:22:19.179661] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:00.341 [2024-07-26 05:22:19.191564] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002a3d0 00:25:00.341 [2024-07-26 05:22:19.199333] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:00.341 05:22:19 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:25:01.280 05:22:20 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:01.280 05:22:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:01.280 05:22:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:01.280 05:22:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:01.280 05:22:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:01.280 05:22:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.280 05:22:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.538 05:22:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:01.538 "name": "raid_bdev1", 00:25:01.538 "uuid": "08a01369-f410-47be-a013-fc7dd0947138", 00:25:01.538 "strip_size_kb": 64, 00:25:01.538 "state": "online", 00:25:01.538 "raid_level": "raid5f", 00:25:01.538 "superblock": true, 00:25:01.538 "num_base_bdevs": 4, 00:25:01.538 "num_base_bdevs_discovered": 4, 00:25:01.538 "num_base_bdevs_operational": 4, 00:25:01.538 "process": { 00:25:01.538 "type": "rebuild", 00:25:01.538 "target": "spare", 00:25:01.538 "progress": { 00:25:01.538 "blocks": 23040, 00:25:01.538 "percent": 12 00:25:01.538 } 00:25:01.538 }, 00:25:01.538 "base_bdevs_list": [ 00:25:01.538 { 00:25:01.538 "name": "spare", 00:25:01.538 "uuid": "a004164d-2bb2-5ad9-a12b-0c7df0b5661a", 00:25:01.538 "is_configured": true, 00:25:01.538 "data_offset": 2048, 00:25:01.538 "data_size": 63488 00:25:01.538 }, 00:25:01.538 { 00:25:01.538 "name": "BaseBdev2", 00:25:01.538 "uuid": "81de5d9c-523f-5869-86f0-89771c4c6e70", 00:25:01.538 "is_configured": true, 00:25:01.538 "data_offset": 2048, 00:25:01.538 "data_size": 63488 00:25:01.538 }, 
00:25:01.538 { 00:25:01.538 "name": "BaseBdev3", 00:25:01.538 "uuid": "07eac370-8c4d-5bbf-aa9c-52eb7ba10ab6", 00:25:01.538 "is_configured": true, 00:25:01.538 "data_offset": 2048, 00:25:01.538 "data_size": 63488 00:25:01.538 }, 00:25:01.538 { 00:25:01.538 "name": "BaseBdev4", 00:25:01.538 "uuid": "d08c79a6-ad7b-5691-9ce6-62d0ee0c968e", 00:25:01.538 "is_configured": true, 00:25:01.538 "data_offset": 2048, 00:25:01.538 "data_size": 63488 00:25:01.538 } 00:25:01.538 ] 00:25:01.538 }' 00:25:01.538 05:22:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:01.538 05:22:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:01.538 05:22:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:01.538 05:22:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:01.538 05:22:20 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:25:01.538 05:22:20 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:25:01.538 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:25:01.538 05:22:20 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:25:01.538 05:22:20 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:25:01.538 05:22:20 -- bdev/bdev_raid.sh@657 -- # local timeout=647 00:25:01.538 05:22:20 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:01.538 05:22:20 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:01.538 05:22:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:01.538 05:22:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:01.538 05:22:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:01.538 05:22:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:01.538 05:22:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.538 05:22:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.797 05:22:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:01.797 "name": "raid_bdev1", 00:25:01.797 "uuid": "08a01369-f410-47be-a013-fc7dd0947138", 00:25:01.797 "strip_size_kb": 64, 00:25:01.797 "state": "online", 00:25:01.797 "raid_level": "raid5f", 00:25:01.797 "superblock": true, 00:25:01.797 "num_base_bdevs": 4, 00:25:01.797 "num_base_bdevs_discovered": 4, 00:25:01.797 "num_base_bdevs_operational": 4, 00:25:01.797 "process": { 00:25:01.797 "type": "rebuild", 00:25:01.797 "target": "spare", 00:25:01.797 "progress": { 00:25:01.797 "blocks": 26880, 00:25:01.797 "percent": 14 00:25:01.797 } 00:25:01.797 }, 00:25:01.797 "base_bdevs_list": [ 00:25:01.797 { 00:25:01.797 "name": "spare", 00:25:01.797 "uuid": "a004164d-2bb2-5ad9-a12b-0c7df0b5661a", 00:25:01.797 "is_configured": true, 00:25:01.797 "data_offset": 2048, 00:25:01.797 "data_size": 63488 00:25:01.797 }, 00:25:01.797 { 00:25:01.797 "name": "BaseBdev2", 00:25:01.797 "uuid": "81de5d9c-523f-5869-86f0-89771c4c6e70", 00:25:01.797 "is_configured": true, 00:25:01.797 "data_offset": 2048, 00:25:01.797 "data_size": 63488 00:25:01.797 }, 00:25:01.797 { 00:25:01.797 "name": "BaseBdev3", 00:25:01.797 "uuid": "07eac370-8c4d-5bbf-aa9c-52eb7ba10ab6", 00:25:01.797 "is_configured": true, 00:25:01.797 "data_offset": 2048, 00:25:01.797 "data_size": 63488 00:25:01.797 }, 00:25:01.797 { 00:25:01.797 "name": "BaseBdev4", 00:25:01.797 "uuid": "d08c79a6-ad7b-5691-9ce6-62d0ee0c968e", 00:25:01.797 "is_configured": true, 00:25:01.797 "data_offset": 2048, 
00:25:01.797 "data_size": 63488 00:25:01.797 } 00:25:01.797 ] 00:25:01.797 }' 00:25:01.797 05:22:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:01.797 05:22:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:01.797 05:22:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:01.797 05:22:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:01.797 05:22:20 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:02.734 05:22:21 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:02.734 05:22:21 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:02.734 05:22:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:02.734 05:22:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:02.734 05:22:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:02.734 05:22:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:02.734 05:22:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:02.734 05:22:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:02.993 05:22:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:02.993 "name": "raid_bdev1", 00:25:02.993 "uuid": "08a01369-f410-47be-a013-fc7dd0947138", 00:25:02.993 "strip_size_kb": 64, 00:25:02.993 "state": "online", 00:25:02.993 "raid_level": "raid5f", 00:25:02.993 "superblock": true, 00:25:02.993 "num_base_bdevs": 4, 00:25:02.993 "num_base_bdevs_discovered": 4, 00:25:02.993 "num_base_bdevs_operational": 4, 00:25:02.993 "process": { 00:25:02.993 "type": "rebuild", 00:25:02.993 "target": "spare", 00:25:02.993 "progress": { 00:25:02.993 "blocks": 51840, 00:25:02.993 "percent": 27 00:25:02.993 } 00:25:02.993 }, 00:25:02.993 "base_bdevs_list": [ 00:25:02.993 { 00:25:02.993 "name": "spare", 00:25:02.993 "uuid": "a004164d-2bb2-5ad9-a12b-0c7df0b5661a", 00:25:02.993 "is_configured": true, 00:25:02.993 "data_offset": 2048, 00:25:02.993 "data_size": 63488 00:25:02.993 }, 00:25:02.993 { 00:25:02.993 "name": "BaseBdev2", 00:25:02.993 "uuid": "81de5d9c-523f-5869-86f0-89771c4c6e70", 00:25:02.993 "is_configured": true, 00:25:02.993 "data_offset": 2048, 00:25:02.993 "data_size": 63488 00:25:02.993 }, 00:25:02.993 { 00:25:02.993 "name": "BaseBdev3", 00:25:02.993 "uuid": "07eac370-8c4d-5bbf-aa9c-52eb7ba10ab6", 00:25:02.993 "is_configured": true, 00:25:02.993 "data_offset": 2048, 00:25:02.993 "data_size": 63488 00:25:02.993 }, 00:25:02.993 { 00:25:02.993 "name": "BaseBdev4", 00:25:02.993 "uuid": "d08c79a6-ad7b-5691-9ce6-62d0ee0c968e", 00:25:02.993 "is_configured": true, 00:25:02.993 "data_offset": 2048, 00:25:02.993 "data_size": 63488 00:25:02.993 } 00:25:02.993 ] 00:25:02.993 }' 00:25:02.993 05:22:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:02.993 05:22:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:02.993 05:22:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:02.993 05:22:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:02.993 05:22:21 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:03.928 05:22:22 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:03.928 05:22:22 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:03.928 05:22:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:03.928 05:22:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:03.928 
05:22:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:03.928 05:22:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:03.928 05:22:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.928 05:22:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:04.187 05:22:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:04.187 "name": "raid_bdev1", 00:25:04.187 "uuid": "08a01369-f410-47be-a013-fc7dd0947138", 00:25:04.187 "strip_size_kb": 64, 00:25:04.187 "state": "online", 00:25:04.187 "raid_level": "raid5f", 00:25:04.187 "superblock": true, 00:25:04.187 "num_base_bdevs": 4, 00:25:04.187 "num_base_bdevs_discovered": 4, 00:25:04.187 "num_base_bdevs_operational": 4, 00:25:04.187 "process": { 00:25:04.187 "type": "rebuild", 00:25:04.187 "target": "spare", 00:25:04.187 "progress": { 00:25:04.187 "blocks": 74880, 00:25:04.187 "percent": 39 00:25:04.187 } 00:25:04.187 }, 00:25:04.187 "base_bdevs_list": [ 00:25:04.187 { 00:25:04.187 "name": "spare", 00:25:04.187 "uuid": "a004164d-2bb2-5ad9-a12b-0c7df0b5661a", 00:25:04.187 "is_configured": true, 00:25:04.187 "data_offset": 2048, 00:25:04.187 "data_size": 63488 00:25:04.187 }, 00:25:04.187 { 00:25:04.187 "name": "BaseBdev2", 00:25:04.187 "uuid": "81de5d9c-523f-5869-86f0-89771c4c6e70", 00:25:04.187 "is_configured": true, 00:25:04.187 "data_offset": 2048, 00:25:04.187 "data_size": 63488 00:25:04.187 }, 00:25:04.187 { 00:25:04.187 "name": "BaseBdev3", 00:25:04.187 "uuid": "07eac370-8c4d-5bbf-aa9c-52eb7ba10ab6", 00:25:04.187 "is_configured": true, 00:25:04.187 "data_offset": 2048, 00:25:04.187 "data_size": 63488 00:25:04.187 }, 00:25:04.187 { 00:25:04.187 "name": "BaseBdev4", 00:25:04.187 "uuid": "d08c79a6-ad7b-5691-9ce6-62d0ee0c968e", 00:25:04.187 "is_configured": true, 00:25:04.187 "data_offset": 2048, 00:25:04.187 "data_size": 63488 00:25:04.187 } 00:25:04.187 ] 00:25:04.187 }' 00:25:04.187 05:22:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:04.187 05:22:23 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:04.187 05:22:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:04.187 05:22:23 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:04.187 05:22:23 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:05.563 05:22:24 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:05.563 05:22:24 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:05.563 05:22:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:05.563 05:22:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:05.563 05:22:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:05.563 05:22:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:05.563 05:22:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:05.563 05:22:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:05.563 05:22:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:05.563 "name": "raid_bdev1", 00:25:05.563 "uuid": "08a01369-f410-47be-a013-fc7dd0947138", 00:25:05.563 "strip_size_kb": 64, 00:25:05.563 "state": "online", 00:25:05.563 "raid_level": "raid5f", 00:25:05.563 "superblock": true, 00:25:05.563 "num_base_bdevs": 4, 00:25:05.563 "num_base_bdevs_discovered": 4, 00:25:05.563 "num_base_bdevs_operational": 4, 00:25:05.563 "process": { 
00:25:05.563 "type": "rebuild", 00:25:05.563 "target": "spare", 00:25:05.563 "progress": { 00:25:05.563 "blocks": 99840, 00:25:05.563 "percent": 52 00:25:05.563 } 00:25:05.563 }, 00:25:05.563 "base_bdevs_list": [ 00:25:05.563 { 00:25:05.563 "name": "spare", 00:25:05.563 "uuid": "a004164d-2bb2-5ad9-a12b-0c7df0b5661a", 00:25:05.563 "is_configured": true, 00:25:05.563 "data_offset": 2048, 00:25:05.563 "data_size": 63488 00:25:05.563 }, 00:25:05.563 { 00:25:05.563 "name": "BaseBdev2", 00:25:05.563 "uuid": "81de5d9c-523f-5869-86f0-89771c4c6e70", 00:25:05.563 "is_configured": true, 00:25:05.563 "data_offset": 2048, 00:25:05.563 "data_size": 63488 00:25:05.563 }, 00:25:05.563 { 00:25:05.563 "name": "BaseBdev3", 00:25:05.563 "uuid": "07eac370-8c4d-5bbf-aa9c-52eb7ba10ab6", 00:25:05.563 "is_configured": true, 00:25:05.563 "data_offset": 2048, 00:25:05.563 "data_size": 63488 00:25:05.563 }, 00:25:05.563 { 00:25:05.563 "name": "BaseBdev4", 00:25:05.563 "uuid": "d08c79a6-ad7b-5691-9ce6-62d0ee0c968e", 00:25:05.563 "is_configured": true, 00:25:05.563 "data_offset": 2048, 00:25:05.563 "data_size": 63488 00:25:05.563 } 00:25:05.563 ] 00:25:05.563 }' 00:25:05.563 05:22:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:05.563 05:22:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:05.563 05:22:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:05.563 05:22:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:05.563 05:22:24 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:06.499 05:22:25 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:06.499 05:22:25 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:06.499 05:22:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:06.499 05:22:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:06.499 05:22:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:06.499 05:22:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:06.499 05:22:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.499 05:22:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:06.758 05:22:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:06.758 "name": "raid_bdev1", 00:25:06.758 "uuid": "08a01369-f410-47be-a013-fc7dd0947138", 00:25:06.758 "strip_size_kb": 64, 00:25:06.758 "state": "online", 00:25:06.758 "raid_level": "raid5f", 00:25:06.758 "superblock": true, 00:25:06.758 "num_base_bdevs": 4, 00:25:06.758 "num_base_bdevs_discovered": 4, 00:25:06.758 "num_base_bdevs_operational": 4, 00:25:06.758 "process": { 00:25:06.758 "type": "rebuild", 00:25:06.758 "target": "spare", 00:25:06.758 "progress": { 00:25:06.758 "blocks": 122880, 00:25:06.758 "percent": 64 00:25:06.758 } 00:25:06.758 }, 00:25:06.758 "base_bdevs_list": [ 00:25:06.758 { 00:25:06.758 "name": "spare", 00:25:06.758 "uuid": "a004164d-2bb2-5ad9-a12b-0c7df0b5661a", 00:25:06.758 "is_configured": true, 00:25:06.758 "data_offset": 2048, 00:25:06.758 "data_size": 63488 00:25:06.758 }, 00:25:06.758 { 00:25:06.758 "name": "BaseBdev2", 00:25:06.758 "uuid": "81de5d9c-523f-5869-86f0-89771c4c6e70", 00:25:06.758 "is_configured": true, 00:25:06.758 "data_offset": 2048, 00:25:06.758 "data_size": 63488 00:25:06.758 }, 00:25:06.758 { 00:25:06.758 "name": "BaseBdev3", 00:25:06.758 "uuid": "07eac370-8c4d-5bbf-aa9c-52eb7ba10ab6", 00:25:06.758 "is_configured": true, 00:25:06.758 
"data_offset": 2048, 00:25:06.758 "data_size": 63488 00:25:06.758 }, 00:25:06.758 { 00:25:06.759 "name": "BaseBdev4", 00:25:06.759 "uuid": "d08c79a6-ad7b-5691-9ce6-62d0ee0c968e", 00:25:06.759 "is_configured": true, 00:25:06.759 "data_offset": 2048, 00:25:06.759 "data_size": 63488 00:25:06.759 } 00:25:06.759 ] 00:25:06.759 }' 00:25:06.759 05:22:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:06.759 05:22:25 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:06.759 05:22:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:06.759 05:22:25 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:06.759 05:22:25 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:07.695 05:22:26 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:07.695 05:22:26 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:07.695 05:22:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:07.695 05:22:26 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:07.695 05:22:26 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:07.695 05:22:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:07.695 05:22:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.695 05:22:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:07.954 05:22:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:07.954 "name": "raid_bdev1", 00:25:07.954 "uuid": "08a01369-f410-47be-a013-fc7dd0947138", 00:25:07.954 "strip_size_kb": 64, 00:25:07.954 "state": "online", 00:25:07.954 "raid_level": "raid5f", 00:25:07.954 "superblock": true, 00:25:07.954 "num_base_bdevs": 4, 00:25:07.954 "num_base_bdevs_discovered": 4, 00:25:07.954 "num_base_bdevs_operational": 4, 00:25:07.954 "process": { 00:25:07.954 "type": "rebuild", 00:25:07.954 "target": "spare", 00:25:07.954 "progress": { 00:25:07.954 "blocks": 147840, 00:25:07.954 "percent": 77 00:25:07.954 } 00:25:07.954 }, 00:25:07.954 "base_bdevs_list": [ 00:25:07.954 { 00:25:07.954 "name": "spare", 00:25:07.954 "uuid": "a004164d-2bb2-5ad9-a12b-0c7df0b5661a", 00:25:07.954 "is_configured": true, 00:25:07.954 "data_offset": 2048, 00:25:07.954 "data_size": 63488 00:25:07.954 }, 00:25:07.954 { 00:25:07.954 "name": "BaseBdev2", 00:25:07.954 "uuid": "81de5d9c-523f-5869-86f0-89771c4c6e70", 00:25:07.954 "is_configured": true, 00:25:07.954 "data_offset": 2048, 00:25:07.954 "data_size": 63488 00:25:07.954 }, 00:25:07.954 { 00:25:07.954 "name": "BaseBdev3", 00:25:07.954 "uuid": "07eac370-8c4d-5bbf-aa9c-52eb7ba10ab6", 00:25:07.954 "is_configured": true, 00:25:07.954 "data_offset": 2048, 00:25:07.954 "data_size": 63488 00:25:07.954 }, 00:25:07.954 { 00:25:07.954 "name": "BaseBdev4", 00:25:07.954 "uuid": "d08c79a6-ad7b-5691-9ce6-62d0ee0c968e", 00:25:07.954 "is_configured": true, 00:25:07.954 "data_offset": 2048, 00:25:07.954 "data_size": 63488 00:25:07.954 } 00:25:07.954 ] 00:25:07.954 }' 00:25:07.954 05:22:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:07.954 05:22:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:07.954 05:22:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:07.954 05:22:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:07.954 05:22:26 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:08.891 05:22:27 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:08.891 05:22:27 -- 
bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:08.891 05:22:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:08.891 05:22:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:08.891 05:22:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:08.891 05:22:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:08.891 05:22:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:08.891 05:22:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:09.150 05:22:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:09.150 "name": "raid_bdev1", 00:25:09.150 "uuid": "08a01369-f410-47be-a013-fc7dd0947138", 00:25:09.150 "strip_size_kb": 64, 00:25:09.150 "state": "online", 00:25:09.150 "raid_level": "raid5f", 00:25:09.150 "superblock": true, 00:25:09.150 "num_base_bdevs": 4, 00:25:09.150 "num_base_bdevs_discovered": 4, 00:25:09.150 "num_base_bdevs_operational": 4, 00:25:09.150 "process": { 00:25:09.150 "type": "rebuild", 00:25:09.150 "target": "spare", 00:25:09.150 "progress": { 00:25:09.150 "blocks": 170880, 00:25:09.150 "percent": 89 00:25:09.150 } 00:25:09.150 }, 00:25:09.150 "base_bdevs_list": [ 00:25:09.150 { 00:25:09.150 "name": "spare", 00:25:09.150 "uuid": "a004164d-2bb2-5ad9-a12b-0c7df0b5661a", 00:25:09.150 "is_configured": true, 00:25:09.150 "data_offset": 2048, 00:25:09.150 "data_size": 63488 00:25:09.150 }, 00:25:09.150 { 00:25:09.150 "name": "BaseBdev2", 00:25:09.150 "uuid": "81de5d9c-523f-5869-86f0-89771c4c6e70", 00:25:09.150 "is_configured": true, 00:25:09.150 "data_offset": 2048, 00:25:09.150 "data_size": 63488 00:25:09.150 }, 00:25:09.150 { 00:25:09.150 "name": "BaseBdev3", 00:25:09.150 "uuid": "07eac370-8c4d-5bbf-aa9c-52eb7ba10ab6", 00:25:09.150 "is_configured": true, 00:25:09.150 "data_offset": 2048, 00:25:09.150 "data_size": 63488 00:25:09.150 }, 00:25:09.150 { 00:25:09.150 "name": "BaseBdev4", 00:25:09.150 "uuid": "d08c79a6-ad7b-5691-9ce6-62d0ee0c968e", 00:25:09.150 "is_configured": true, 00:25:09.150 "data_offset": 2048, 00:25:09.150 "data_size": 63488 00:25:09.150 } 00:25:09.150 ] 00:25:09.150 }' 00:25:09.150 05:22:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:09.150 05:22:28 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:09.150 05:22:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:09.150 05:22:28 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:09.150 05:22:28 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:10.527 05:22:29 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:10.527 05:22:29 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:10.527 05:22:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:10.527 05:22:29 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:10.527 05:22:29 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:10.527 05:22:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:10.527 05:22:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.527 05:22:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:10.527 [2024-07-26 05:22:29.264921] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:10.527 [2024-07-26 05:22:29.264995] 
bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:10.527 [2024-07-26 05:22:29.265177] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:10.527 05:22:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:10.527 "name": "raid_bdev1", 00:25:10.527 "uuid": "08a01369-f410-47be-a013-fc7dd0947138", 00:25:10.527 "strip_size_kb": 64, 00:25:10.527 "state": "online", 00:25:10.527 "raid_level": "raid5f", 00:25:10.527 "superblock": true, 00:25:10.527 "num_base_bdevs": 4, 00:25:10.527 "num_base_bdevs_discovered": 4, 00:25:10.527 "num_base_bdevs_operational": 4, 00:25:10.527 "base_bdevs_list": [ 00:25:10.527 { 00:25:10.527 "name": "spare", 00:25:10.527 "uuid": "a004164d-2bb2-5ad9-a12b-0c7df0b5661a", 00:25:10.527 "is_configured": true, 00:25:10.527 "data_offset": 2048, 00:25:10.527 "data_size": 63488 00:25:10.527 }, 00:25:10.527 { 00:25:10.527 "name": "BaseBdev2", 00:25:10.527 "uuid": "81de5d9c-523f-5869-86f0-89771c4c6e70", 00:25:10.527 "is_configured": true, 00:25:10.527 "data_offset": 2048, 00:25:10.527 "data_size": 63488 00:25:10.527 }, 00:25:10.527 { 00:25:10.527 "name": "BaseBdev3", 00:25:10.527 "uuid": "07eac370-8c4d-5bbf-aa9c-52eb7ba10ab6", 00:25:10.527 "is_configured": true, 00:25:10.527 "data_offset": 2048, 00:25:10.527 "data_size": 63488 00:25:10.527 }, 00:25:10.528 { 00:25:10.528 "name": "BaseBdev4", 00:25:10.528 "uuid": "d08c79a6-ad7b-5691-9ce6-62d0ee0c968e", 00:25:10.528 "is_configured": true, 00:25:10.528 "data_offset": 2048, 00:25:10.528 "data_size": 63488 00:25:10.528 } 00:25:10.528 ] 00:25:10.528 }' 00:25:10.528 05:22:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:10.528 05:22:29 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:10.528 05:22:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:10.528 05:22:29 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:25:10.528 05:22:29 -- bdev/bdev_raid.sh@660 -- # break 00:25:10.528 05:22:29 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:10.528 05:22:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:10.528 05:22:29 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:10.528 05:22:29 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:10.528 05:22:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:10.528 05:22:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.528 05:22:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:10.787 05:22:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:10.787 "name": "raid_bdev1", 00:25:10.787 "uuid": "08a01369-f410-47be-a013-fc7dd0947138", 00:25:10.787 "strip_size_kb": 64, 00:25:10.787 "state": "online", 00:25:10.787 "raid_level": "raid5f", 00:25:10.787 "superblock": true, 00:25:10.787 "num_base_bdevs": 4, 00:25:10.787 "num_base_bdevs_discovered": 4, 00:25:10.787 "num_base_bdevs_operational": 4, 00:25:10.787 "base_bdevs_list": [ 00:25:10.787 { 00:25:10.787 "name": "spare", 00:25:10.787 "uuid": "a004164d-2bb2-5ad9-a12b-0c7df0b5661a", 00:25:10.787 "is_configured": true, 00:25:10.787 "data_offset": 2048, 00:25:10.787 "data_size": 63488 00:25:10.787 }, 00:25:10.787 { 00:25:10.787 "name": "BaseBdev2", 00:25:10.787 "uuid": "81de5d9c-523f-5869-86f0-89771c4c6e70", 00:25:10.787 "is_configured": true, 00:25:10.787 "data_offset": 2048, 00:25:10.787 "data_size": 63488 00:25:10.787 }, 
00:25:10.787 { 00:25:10.787 "name": "BaseBdev3", 00:25:10.787 "uuid": "07eac370-8c4d-5bbf-aa9c-52eb7ba10ab6", 00:25:10.787 "is_configured": true, 00:25:10.787 "data_offset": 2048, 00:25:10.787 "data_size": 63488 00:25:10.787 }, 00:25:10.787 { 00:25:10.787 "name": "BaseBdev4", 00:25:10.787 "uuid": "d08c79a6-ad7b-5691-9ce6-62d0ee0c968e", 00:25:10.787 "is_configured": true, 00:25:10.787 "data_offset": 2048, 00:25:10.787 "data_size": 63488 00:25:10.787 } 00:25:10.787 ] 00:25:10.787 }' 00:25:10.787 05:22:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:10.787 05:22:29 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:10.787 05:22:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:10.787 05:22:29 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:10.787 05:22:29 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:10.787 05:22:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:10.787 05:22:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:10.787 05:22:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:10.787 05:22:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:10.787 05:22:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:10.787 05:22:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:10.787 05:22:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:10.787 05:22:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:10.787 05:22:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:10.787 05:22:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.787 05:22:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:11.046 05:22:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:11.046 "name": "raid_bdev1", 00:25:11.046 "uuid": "08a01369-f410-47be-a013-fc7dd0947138", 00:25:11.046 "strip_size_kb": 64, 00:25:11.046 "state": "online", 00:25:11.046 "raid_level": "raid5f", 00:25:11.046 "superblock": true, 00:25:11.046 "num_base_bdevs": 4, 00:25:11.046 "num_base_bdevs_discovered": 4, 00:25:11.046 "num_base_bdevs_operational": 4, 00:25:11.046 "base_bdevs_list": [ 00:25:11.046 { 00:25:11.046 "name": "spare", 00:25:11.046 "uuid": "a004164d-2bb2-5ad9-a12b-0c7df0b5661a", 00:25:11.046 "is_configured": true, 00:25:11.046 "data_offset": 2048, 00:25:11.046 "data_size": 63488 00:25:11.046 }, 00:25:11.046 { 00:25:11.046 "name": "BaseBdev2", 00:25:11.046 "uuid": "81de5d9c-523f-5869-86f0-89771c4c6e70", 00:25:11.046 "is_configured": true, 00:25:11.046 "data_offset": 2048, 00:25:11.046 "data_size": 63488 00:25:11.046 }, 00:25:11.046 { 00:25:11.046 "name": "BaseBdev3", 00:25:11.046 "uuid": "07eac370-8c4d-5bbf-aa9c-52eb7ba10ab6", 00:25:11.046 "is_configured": true, 00:25:11.046 "data_offset": 2048, 00:25:11.046 "data_size": 63488 00:25:11.046 }, 00:25:11.046 { 00:25:11.046 "name": "BaseBdev4", 00:25:11.046 "uuid": "d08c79a6-ad7b-5691-9ce6-62d0ee0c968e", 00:25:11.046 "is_configured": true, 00:25:11.046 "data_offset": 2048, 00:25:11.046 "data_size": 63488 00:25:11.046 } 00:25:11.046 ] 00:25:11.046 }' 00:25:11.046 05:22:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:11.046 05:22:29 -- common/autotest_common.sh@10 -- # set +x 00:25:11.305 05:22:30 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:11.565 [2024-07-26 
05:22:30.498603] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:11.565 [2024-07-26 05:22:30.498841] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:11.565 [2024-07-26 05:22:30.499101] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:11.565 [2024-07-26 05:22:30.499361] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:11.565 [2024-07-26 05:22:30.499387] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:25:11.565 05:22:30 -- bdev/bdev_raid.sh@671 -- # jq length 00:25:11.565 05:22:30 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:11.824 05:22:30 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:25:11.824 05:22:30 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:25:11.824 05:22:30 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:11.824 05:22:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:11.824 05:22:30 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:11.824 05:22:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:11.824 05:22:30 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:11.824 05:22:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:11.824 05:22:30 -- bdev/nbd_common.sh@12 -- # local i 00:25:11.824 05:22:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:11.824 05:22:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:11.824 05:22:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:11.824 /dev/nbd0 00:25:11.824 05:22:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:11.824 05:22:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:11.824 05:22:30 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:25:11.824 05:22:30 -- common/autotest_common.sh@857 -- # local i 00:25:11.824 05:22:30 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:11.824 05:22:30 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:11.824 05:22:30 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:25:11.824 05:22:30 -- common/autotest_common.sh@861 -- # break 00:25:11.824 05:22:30 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:11.824 05:22:30 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:11.824 05:22:30 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:11.824 1+0 records in 00:25:11.824 1+0 records out 00:25:11.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288375 s, 14.2 MB/s 00:25:11.824 05:22:30 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:11.824 05:22:30 -- common/autotest_common.sh@874 -- # size=4096 00:25:11.824 05:22:30 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:11.824 05:22:30 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:11.824 05:22:30 -- common/autotest_common.sh@877 -- # return 0 00:25:11.824 05:22:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:11.824 05:22:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:11.824 05:22:30 -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:12.083 /dev/nbd1 00:25:12.083 05:22:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:12.083 05:22:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:12.083 05:22:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:25:12.083 05:22:31 -- common/autotest_common.sh@857 -- # local i 00:25:12.083 05:22:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:12.083 05:22:31 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:12.083 05:22:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:25:12.083 05:22:31 -- common/autotest_common.sh@861 -- # break 00:25:12.083 05:22:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:12.083 05:22:31 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:12.083 05:22:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:12.083 1+0 records in 00:25:12.083 1+0 records out 00:25:12.083 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294764 s, 13.9 MB/s 00:25:12.083 05:22:31 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:12.083 05:22:31 -- common/autotest_common.sh@874 -- # size=4096 00:25:12.083 05:22:31 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:12.083 05:22:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:12.083 05:22:31 -- common/autotest_common.sh@877 -- # return 0 00:25:12.083 05:22:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:12.083 05:22:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:12.083 05:22:31 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:12.343 05:22:31 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:12.343 05:22:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:12.343 05:22:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:12.343 05:22:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:12.343 05:22:31 -- bdev/nbd_common.sh@51 -- # local i 00:25:12.343 05:22:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:12.343 05:22:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:12.343 05:22:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:12.343 05:22:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:12.343 05:22:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:12.343 05:22:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:12.343 05:22:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:12.343 05:22:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:12.343 05:22:31 -- bdev/nbd_common.sh@41 -- # break 00:25:12.343 05:22:31 -- bdev/nbd_common.sh@45 -- # return 0 00:25:12.601 05:22:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:12.602 05:22:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:12.860 05:22:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:12.860 05:22:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:12.860 05:22:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:12.860 05:22:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:12.860 05:22:31 -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:12.860 05:22:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:12.860 05:22:31 -- bdev/nbd_common.sh@41 -- # break 00:25:12.860 05:22:31 -- bdev/nbd_common.sh@45 -- # return 0 00:25:12.860 05:22:31 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:25:12.860 05:22:31 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:12.860 05:22:31 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:25:12.860 05:22:31 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:25:13.119 05:22:31 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:13.119 [2024-07-26 05:22:32.214454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:13.119 [2024-07-26 05:22:32.214689] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:13.119 [2024-07-26 05:22:32.214736] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b480 00:25:13.119 [2024-07-26 05:22:32.214752] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:13.119 [2024-07-26 05:22:32.217092] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:13.119 [2024-07-26 05:22:32.217133] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:13.119 [2024-07-26 05:22:32.217229] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:13.119 [2024-07-26 05:22:32.217283] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:13.119 BaseBdev1 00:25:13.378 05:22:32 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:13.378 05:22:32 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:25:13.378 05:22:32 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:25:13.378 05:22:32 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:13.638 [2024-07-26 05:22:32.618534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:13.638 [2024-07-26 05:22:32.618590] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:13.638 [2024-07-26 05:22:32.618630] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000bd80 00:25:13.638 [2024-07-26 05:22:32.618645] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:13.638 [2024-07-26 05:22:32.619090] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:13.638 [2024-07-26 05:22:32.619113] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:13.638 [2024-07-26 05:22:32.619213] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:25:13.638 [2024-07-26 05:22:32.619228] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:25:13.638 [2024-07-26 05:22:32.619240] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:13.638 [2024-07-26 05:22:32.619260] bdev_raid.c: 
351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ba80 name raid_bdev1, state configuring 00:25:13.638 [2024-07-26 05:22:32.619331] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:13.638 BaseBdev2 00:25:13.638 05:22:32 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:13.638 05:22:32 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:25:13.638 05:22:32 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:25:13.897 05:22:32 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:13.897 [2024-07-26 05:22:32.986666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:13.897 [2024-07-26 05:22:32.986745] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:13.897 [2024-07-26 05:22:32.986775] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c380 00:25:13.897 [2024-07-26 05:22:32.986790] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:13.897 [2024-07-26 05:22:32.987329] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:13.897 [2024-07-26 05:22:32.987379] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:13.897 [2024-07-26 05:22:32.987482] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:25:13.897 [2024-07-26 05:22:32.987515] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:13.897 BaseBdev3 00:25:13.897 05:22:33 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:13.897 05:22:33 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:25:13.897 05:22:33 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:25:14.155 05:22:33 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:14.413 [2024-07-26 05:22:33.398735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:14.413 [2024-07-26 05:22:33.398801] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:14.413 [2024-07-26 05:22:33.398831] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c680 00:25:14.413 [2024-07-26 05:22:33.398845] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:14.413 [2024-07-26 05:22:33.399392] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:14.413 [2024-07-26 05:22:33.399434] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:14.413 [2024-07-26 05:22:33.399531] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:25:14.413 [2024-07-26 05:22:33.399560] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:14.413 BaseBdev4 00:25:14.413 05:22:33 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:14.671 05:22:33 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b spare_delay -p spare 00:25:14.671 [2024-07-26 05:22:33.766800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:14.671 [2024-07-26 05:22:33.767065] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:14.671 [2024-07-26 05:22:33.767110] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c980 00:25:14.671 [2024-07-26 05:22:33.767128] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:14.671 [2024-07-26 05:22:33.767652] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:14.671 [2024-07-26 05:22:33.767683] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:14.671 [2024-07-26 05:22:33.767773] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:25:14.671 [2024-07-26 05:22:33.767827] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:14.671 spare 00:25:14.929 05:22:33 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:14.929 05:22:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:14.929 05:22:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:14.929 05:22:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:14.929 05:22:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:14.929 05:22:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:14.929 05:22:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:14.929 05:22:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:14.929 05:22:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:14.929 05:22:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:14.929 05:22:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.929 05:22:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:14.929 [2024-07-26 05:22:33.867962] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000c080 00:25:14.929 [2024-07-26 05:22:33.867993] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:14.929 [2024-07-26 05:22:33.868151] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000048a80 00:25:14.929 [2024-07-26 05:22:33.873382] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000c080 00:25:14.929 [2024-07-26 05:22:33.873541] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000c080 00:25:14.929 [2024-07-26 05:22:33.873725] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:14.929 05:22:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:14.929 "name": "raid_bdev1", 00:25:14.929 "uuid": "08a01369-f410-47be-a013-fc7dd0947138", 00:25:14.929 "strip_size_kb": 64, 00:25:14.929 "state": "online", 00:25:14.929 "raid_level": "raid5f", 00:25:14.929 "superblock": true, 00:25:14.929 "num_base_bdevs": 4, 00:25:14.929 "num_base_bdevs_discovered": 4, 00:25:14.929 "num_base_bdevs_operational": 4, 00:25:14.930 "base_bdevs_list": [ 00:25:14.930 { 00:25:14.930 "name": "spare", 00:25:14.930 "uuid": "a004164d-2bb2-5ad9-a12b-0c7df0b5661a", 00:25:14.930 "is_configured": true, 00:25:14.930 "data_offset": 2048, 00:25:14.930 "data_size": 63488 00:25:14.930 }, 
00:25:14.930 { 00:25:14.930 "name": "BaseBdev2", 00:25:14.930 "uuid": "81de5d9c-523f-5869-86f0-89771c4c6e70", 00:25:14.930 "is_configured": true, 00:25:14.930 "data_offset": 2048, 00:25:14.930 "data_size": 63488 00:25:14.930 }, 00:25:14.930 { 00:25:14.930 "name": "BaseBdev3", 00:25:14.930 "uuid": "07eac370-8c4d-5bbf-aa9c-52eb7ba10ab6", 00:25:14.930 "is_configured": true, 00:25:14.930 "data_offset": 2048, 00:25:14.930 "data_size": 63488 00:25:14.930 }, 00:25:14.930 { 00:25:14.930 "name": "BaseBdev4", 00:25:14.930 "uuid": "d08c79a6-ad7b-5691-9ce6-62d0ee0c968e", 00:25:14.930 "is_configured": true, 00:25:14.930 "data_offset": 2048, 00:25:14.930 "data_size": 63488 00:25:14.930 } 00:25:14.930 ] 00:25:14.930 }' 00:25:14.930 05:22:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:14.930 05:22:33 -- common/autotest_common.sh@10 -- # set +x 00:25:15.188 05:22:34 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:15.188 05:22:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:15.188 05:22:34 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:15.188 05:22:34 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:15.188 05:22:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:15.188 05:22:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.188 05:22:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.460 05:22:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:15.460 "name": "raid_bdev1", 00:25:15.460 "uuid": "08a01369-f410-47be-a013-fc7dd0947138", 00:25:15.460 "strip_size_kb": 64, 00:25:15.460 "state": "online", 00:25:15.460 "raid_level": "raid5f", 00:25:15.460 "superblock": true, 00:25:15.460 "num_base_bdevs": 4, 00:25:15.460 "num_base_bdevs_discovered": 4, 00:25:15.460 "num_base_bdevs_operational": 4, 00:25:15.460 "base_bdevs_list": [ 00:25:15.460 { 00:25:15.460 "name": "spare", 00:25:15.460 "uuid": "a004164d-2bb2-5ad9-a12b-0c7df0b5661a", 00:25:15.460 "is_configured": true, 00:25:15.460 "data_offset": 2048, 00:25:15.460 "data_size": 63488 00:25:15.460 }, 00:25:15.460 { 00:25:15.460 "name": "BaseBdev2", 00:25:15.460 "uuid": "81de5d9c-523f-5869-86f0-89771c4c6e70", 00:25:15.460 "is_configured": true, 00:25:15.460 "data_offset": 2048, 00:25:15.460 "data_size": 63488 00:25:15.460 }, 00:25:15.460 { 00:25:15.460 "name": "BaseBdev3", 00:25:15.460 "uuid": "07eac370-8c4d-5bbf-aa9c-52eb7ba10ab6", 00:25:15.460 "is_configured": true, 00:25:15.460 "data_offset": 2048, 00:25:15.460 "data_size": 63488 00:25:15.460 }, 00:25:15.460 { 00:25:15.460 "name": "BaseBdev4", 00:25:15.460 "uuid": "d08c79a6-ad7b-5691-9ce6-62d0ee0c968e", 00:25:15.460 "is_configured": true, 00:25:15.460 "data_offset": 2048, 00:25:15.460 "data_size": 63488 00:25:15.460 } 00:25:15.460 ] 00:25:15.460 }' 00:25:15.460 05:22:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:15.460 05:22:34 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:15.460 05:22:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:15.460 05:22:34 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:15.460 05:22:34 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:15.460 05:22:34 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.740 05:22:34 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:25:15.740 05:22:34 -- 
bdev/bdev_raid.sh@709 -- # killprocess 86458 00:25:15.740 05:22:34 -- common/autotest_common.sh@926 -- # '[' -z 86458 ']' 00:25:15.740 05:22:34 -- common/autotest_common.sh@930 -- # kill -0 86458 00:25:15.740 05:22:34 -- common/autotest_common.sh@931 -- # uname 00:25:15.740 05:22:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:15.740 05:22:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86458 00:25:15.740 killing process with pid 86458 00:25:15.740 Received shutdown signal, test time was about 60.000000 seconds 00:25:15.740 00:25:15.740 Latency(us) 00:25:15.740 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.740 =================================================================================================================== 00:25:15.740 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:15.740 05:22:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:15.740 05:22:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:15.740 05:22:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86458' 00:25:15.740 05:22:34 -- common/autotest_common.sh@945 -- # kill 86458 00:25:15.740 [2024-07-26 05:22:34.756156] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:15.740 05:22:34 -- common/autotest_common.sh@950 -- # wait 86458 00:25:15.740 [2024-07-26 05:22:34.756247] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:15.740 [2024-07-26 05:22:34.756341] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:15.740 [2024-07-26 05:22:34.756375] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000c080 name raid_bdev1, state offline 00:25:15.998 [2024-07-26 05:22:35.069300] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:16.934 05:22:35 -- bdev/bdev_raid.sh@711 -- # return 0 00:25:16.934 00:25:16.934 real 0m25.395s 00:25:16.934 user 0m36.248s 00:25:16.934 sys 0m3.039s 00:25:16.934 ************************************ 00:25:16.934 END TEST raid5f_rebuild_test_sb 00:25:16.934 ************************************ 00:25:16.934 05:22:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:16.934 05:22:35 -- common/autotest_common.sh@10 -- # set +x 00:25:16.934 05:22:36 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:25:16.934 ************************************ 00:25:16.934 END TEST bdev_raid 00:25:16.934 ************************************ 00:25:16.934 00:25:16.934 real 10m32.046s 00:25:16.934 user 16m18.367s 00:25:16.934 sys 1m34.800s 00:25:16.934 05:22:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:16.934 05:22:36 -- common/autotest_common.sh@10 -- # set +x 00:25:17.193 05:22:36 -- spdk/autotest.sh@197 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:25:17.193 05:22:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:17.193 05:22:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:17.193 05:22:36 -- common/autotest_common.sh@10 -- # set +x 00:25:17.193 ************************************ 00:25:17.193 START TEST bdevperf_config 00:25:17.193 ************************************ 00:25:17.193 05:22:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:25:17.193 * Looking for test storage... 
00:25:17.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:25:17.193 05:22:36 -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:25:17.193 05:22:36 -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:25:17.193 05:22:36 -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:25:17.193 05:22:36 -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:25:17.193 05:22:36 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:17.193 05:22:36 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:25:17.193 05:22:36 -- bdevperf/common.sh@8 -- # local job_section=global 00:25:17.193 05:22:36 -- bdevperf/common.sh@9 -- # local rw=read 00:25:17.193 05:22:36 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:25:17.193 05:22:36 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:25:17.193 05:22:36 -- bdevperf/common.sh@13 -- # cat 00:25:17.193 05:22:36 -- bdevperf/common.sh@18 -- # job='[global]' 00:25:17.193 00:25:17.193 05:22:36 -- bdevperf/common.sh@19 -- # echo 00:25:17.193 05:22:36 -- bdevperf/common.sh@20 -- # cat 00:25:17.193 05:22:36 -- bdevperf/test_config.sh@18 -- # create_job job0 00:25:17.193 05:22:36 -- bdevperf/common.sh@8 -- # local job_section=job0 00:25:17.193 05:22:36 -- bdevperf/common.sh@9 -- # local rw= 00:25:17.193 05:22:36 -- bdevperf/common.sh@10 -- # local filename= 00:25:17.193 05:22:36 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:25:17.193 00:25:17.193 05:22:36 -- bdevperf/common.sh@18 -- # job='[job0]' 00:25:17.193 05:22:36 -- bdevperf/common.sh@19 -- # echo 00:25:17.193 05:22:36 -- bdevperf/common.sh@20 -- # cat 00:25:17.193 05:22:36 -- bdevperf/test_config.sh@19 -- # create_job job1 00:25:17.193 05:22:36 -- bdevperf/common.sh@8 -- # local job_section=job1 00:25:17.193 05:22:36 -- bdevperf/common.sh@9 -- # local rw= 00:25:17.193 05:22:36 -- bdevperf/common.sh@10 -- # local filename= 00:25:17.193 00:25:17.193 05:22:36 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:25:17.193 05:22:36 -- bdevperf/common.sh@18 -- # job='[job1]' 00:25:17.193 05:22:36 -- bdevperf/common.sh@19 -- # echo 00:25:17.193 05:22:36 -- bdevperf/common.sh@20 -- # cat 00:25:17.193 05:22:36 -- bdevperf/test_config.sh@20 -- # create_job job2 00:25:17.193 05:22:36 -- bdevperf/common.sh@8 -- # local job_section=job2 00:25:17.193 05:22:36 -- bdevperf/common.sh@9 -- # local rw= 00:25:17.193 05:22:36 -- bdevperf/common.sh@10 -- # local filename= 00:25:17.193 05:22:36 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:25:17.193 05:22:36 -- bdevperf/common.sh@18 -- # job='[job2]' 00:25:17.193 05:22:36 -- bdevperf/common.sh@19 -- # echo 00:25:17.193 05:22:36 -- bdevperf/common.sh@20 -- # cat 00:25:17.193 00:25:17.193 05:22:36 -- bdevperf/test_config.sh@21 -- # create_job job3 00:25:17.193 05:22:36 -- bdevperf/common.sh@8 -- # local job_section=job3 00:25:17.193 05:22:36 -- bdevperf/common.sh@9 -- # local rw= 00:25:17.193 05:22:36 -- bdevperf/common.sh@10 -- # local filename= 00:25:17.193 00:25:17.193 05:22:36 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:25:17.193 05:22:36 -- bdevperf/common.sh@18 -- # job='[job3]' 00:25:17.193 05:22:36 -- bdevperf/common.sh@19 -- # echo 00:25:17.193 05:22:36 -- bdevperf/common.sh@20 -- # cat 00:25:17.193 05:22:36 -- bdevperf/test_config.sh@22 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:25:21.384 05:22:40 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-26 05:22:36.251148] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:21.385 [2024-07-26 05:22:36.251894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87141 ] 00:25:21.385 Using job config with 4 jobs 00:25:21.385 [2024-07-26 05:22:36.421338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.385 [2024-07-26 05:22:36.587438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.385 cpumask for '\''job0'\'' is too big 00:25:21.385 cpumask for '\''job1'\'' is too big 00:25:21.385 cpumask for '\''job2'\'' is too big 00:25:21.385 cpumask for '\''job3'\'' is too big 00:25:21.385 Running I/O for 2 seconds... 00:25:21.385 00:25:21.385 Latency(us) 00:25:21.385 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.385 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:21.385 Malloc0 : 2.01 31139.97 30.41 0.00 0.00 8211.54 1452.22 12630.57 00:25:21.385 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:21.385 Malloc0 : 2.02 31114.80 30.39 0.00 0.00 8203.31 1407.53 11260.28 00:25:21.385 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:21.385 Malloc0 : 2.02 31093.85 30.37 0.00 0.00 8194.19 1422.43 10426.18 00:25:21.385 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:21.385 Malloc0 : 2.02 31072.95 30.34 0.00 0.00 8185.74 1429.88 10307.03 00:25:21.385 =================================================================================================================== 00:25:21.385 Total : 124421.57 121.51 0.00 0.00 8198.70 1407.53 12630.57' 00:25:21.385 05:22:40 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-26 05:22:36.251148] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:21.385 [2024-07-26 05:22:36.251894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87141 ] 00:25:21.385 Using job config with 4 jobs 00:25:21.385 [2024-07-26 05:22:36.421338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.385 [2024-07-26 05:22:36.587438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.385 cpumask for '\''job0'\'' is too big 00:25:21.385 cpumask for '\''job1'\'' is too big 00:25:21.385 cpumask for '\''job2'\'' is too big 00:25:21.385 cpumask for '\''job3'\'' is too big 00:25:21.385 Running I/O for 2 seconds... 
00:25:21.385 00:25:21.385 Latency(us) 00:25:21.385 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.385 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:21.385 Malloc0 : 2.01 31139.97 30.41 0.00 0.00 8211.54 1452.22 12630.57 00:25:21.385 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:21.385 Malloc0 : 2.02 31114.80 30.39 0.00 0.00 8203.31 1407.53 11260.28 00:25:21.385 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:21.385 Malloc0 : 2.02 31093.85 30.37 0.00 0.00 8194.19 1422.43 10426.18 00:25:21.385 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:21.385 Malloc0 : 2.02 31072.95 30.34 0.00 0.00 8185.74 1429.88 10307.03 00:25:21.385 =================================================================================================================== 00:25:21.385 Total : 124421.57 121.51 0.00 0.00 8198.70 1407.53 12630.57' 00:25:21.385 05:22:40 -- bdevperf/common.sh@32 -- # echo '[2024-07-26 05:22:36.251148] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:21.385 [2024-07-26 05:22:36.251894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87141 ] 00:25:21.385 Using job config with 4 jobs 00:25:21.385 [2024-07-26 05:22:36.421338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.385 [2024-07-26 05:22:36.587438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.385 cpumask for '\''job0'\'' is too big 00:25:21.385 cpumask for '\''job1'\'' is too big 00:25:21.385 cpumask for '\''job2'\'' is too big 00:25:21.385 cpumask for '\''job3'\'' is too big 00:25:21.385 Running I/O for 2 seconds... 00:25:21.385 00:25:21.385 Latency(us) 00:25:21.385 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.385 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:21.385 Malloc0 : 2.01 31139.97 30.41 0.00 0.00 8211.54 1452.22 12630.57 00:25:21.385 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:21.385 Malloc0 : 2.02 31114.80 30.39 0.00 0.00 8203.31 1407.53 11260.28 00:25:21.385 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:21.385 Malloc0 : 2.02 31093.85 30.37 0.00 0.00 8194.19 1422.43 10426.18 00:25:21.385 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:21.385 Malloc0 : 2.02 31072.95 30.34 0.00 0.00 8185.74 1429.88 10307.03 00:25:21.385 =================================================================================================================== 00:25:21.385 Total : 124421.57 121.51 0.00 0.00 8198.70 1407.53 12630.57' 00:25:21.385 05:22:40 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:25:21.385 05:22:40 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:25:21.385 05:22:40 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:25:21.385 05:22:40 -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:25:21.385 [2024-07-26 05:22:40.121533] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:25:21.385 [2024-07-26 05:22:40.121711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87191 ] 00:25:21.385 [2024-07-26 05:22:40.289586] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.385 [2024-07-26 05:22:40.450361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.953 cpumask for 'job0' is too big 00:25:21.953 cpumask for 'job1' is too big 00:25:21.953 cpumask for 'job2' is too big 00:25:21.953 cpumask for 'job3' is too big 00:25:25.240 05:22:43 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:25:25.240 Running I/O for 2 seconds... 00:25:25.240 00:25:25.240 Latency(us) 00:25:25.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:25.240 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:25.240 Malloc0 : 2.01 31340.84 30.61 0.00 0.00 8163.80 1467.11 12690.15 00:25:25.240 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:25.240 Malloc0 : 2.02 31353.79 30.62 0.00 0.00 8147.22 1422.43 11260.28 00:25:25.240 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:25.240 Malloc0 : 2.02 31333.20 30.60 0.00 0.00 8137.92 1437.32 10366.60 00:25:25.240 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:25.240 Malloc0 : 2.02 31312.91 30.58 0.00 0.00 8128.26 1444.77 10187.87 00:25:25.240 =================================================================================================================== 00:25:25.240 Total : 125340.74 122.40 0.00 0.00 8144.28 1422.43 12690.15' 00:25:25.240 05:22:43 -- bdevperf/test_config.sh@27 -- # cleanup 00:25:25.240 05:22:43 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:25:25.240 05:22:43 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:25:25.240 05:22:43 -- bdevperf/common.sh@8 -- # local job_section=job0 00:25:25.240 05:22:43 -- bdevperf/common.sh@9 -- # local rw=write 00:25:25.240 05:22:43 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:25:25.240 05:22:43 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:25:25.240 00:25:25.240 05:22:43 -- bdevperf/common.sh@18 -- # job='[job0]' 00:25:25.240 05:22:43 -- bdevperf/common.sh@19 -- # echo 00:25:25.240 05:22:43 -- bdevperf/common.sh@20 -- # cat 00:25:25.240 05:22:43 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:25:25.240 05:22:43 -- bdevperf/common.sh@8 -- # local job_section=job1 00:25:25.240 05:22:43 -- bdevperf/common.sh@9 -- # local rw=write 00:25:25.240 05:22:43 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:25:25.240 05:22:43 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:25:25.240 00:25:25.240 05:22:43 -- bdevperf/common.sh@18 -- # job='[job1]' 00:25:25.240 05:22:43 -- bdevperf/common.sh@19 -- # echo 00:25:25.240 05:22:43 -- bdevperf/common.sh@20 -- # cat 00:25:25.240 05:22:43 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:25:25.240 05:22:43 -- bdevperf/common.sh@8 -- # local job_section=job2 00:25:25.240 05:22:43 -- bdevperf/common.sh@9 -- # local rw=write 00:25:25.240 05:22:43 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:25:25.240 05:22:43 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:25:25.240 05:22:43 -- bdevperf/common.sh@18 -- # 
job='[job2]' 00:25:25.240 00:25:25.240 05:22:43 -- bdevperf/common.sh@19 -- # echo 00:25:25.240 05:22:43 -- bdevperf/common.sh@20 -- # cat 00:25:25.240 05:22:43 -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:25:29.431 05:22:47 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-26 05:22:43.989580] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:29.431 [2024-07-26 05:22:43.990428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87240 ] 00:25:29.431 Using job config with 3 jobs 00:25:29.431 [2024-07-26 05:22:44.160038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.431 [2024-07-26 05:22:44.363308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.431 cpumask for '\''job0'\'' is too big 00:25:29.431 cpumask for '\''job1'\'' is too big 00:25:29.431 cpumask for '\''job2'\'' is too big 00:25:29.431 Running I/O for 2 seconds... 00:25:29.431 00:25:29.431 Latency(us) 00:25:29.431 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:29.431 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:25:29.431 Malloc0 : 2.01 41988.01 41.00 0.00 0.00 6090.47 1452.22 8996.31 00:25:29.431 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:25:29.431 Malloc0 : 2.01 41960.52 40.98 0.00 0.00 6084.53 1414.98 8400.52 00:25:29.431 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:25:29.431 Malloc0 : 2.01 41933.26 40.95 0.00 0.00 6077.57 1400.09 8340.95 00:25:29.431 =================================================================================================================== 00:25:29.431 Total : 125881.79 122.93 0.00 0.00 6084.19 1400.09 8996.31' 00:25:29.431 05:22:47 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-26 05:22:43.989580] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:29.431 [2024-07-26 05:22:43.990428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87240 ] 00:25:29.431 Using job config with 3 jobs 00:25:29.431 [2024-07-26 05:22:44.160038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.431 [2024-07-26 05:22:44.363308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.431 cpumask for '\''job0'\'' is too big 00:25:29.431 cpumask for '\''job1'\'' is too big 00:25:29.431 cpumask for '\''job2'\'' is too big 00:25:29.431 Running I/O for 2 seconds... 
00:25:29.431 00:25:29.431 Latency(us) 00:25:29.431 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:29.431 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:25:29.431 Malloc0 : 2.01 41988.01 41.00 0.00 0.00 6090.47 1452.22 8996.31 00:25:29.431 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:25:29.431 Malloc0 : 2.01 41960.52 40.98 0.00 0.00 6084.53 1414.98 8400.52 00:25:29.431 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:25:29.431 Malloc0 : 2.01 41933.26 40.95 0.00 0.00 6077.57 1400.09 8340.95 00:25:29.431 =================================================================================================================== 00:25:29.431 Total : 125881.79 122.93 0.00 0.00 6084.19 1400.09 8996.31' 00:25:29.431 05:22:47 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:25:29.431 05:22:47 -- bdevperf/common.sh@32 -- # echo '[2024-07-26 05:22:43.989580] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:29.431 [2024-07-26 05:22:43.990428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87240 ] 00:25:29.431 Using job config with 3 jobs 00:25:29.431 [2024-07-26 05:22:44.160038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.431 [2024-07-26 05:22:44.363308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.431 cpumask for '\''job0'\'' is too big 00:25:29.431 cpumask for '\''job1'\'' is too big 00:25:29.431 cpumask for '\''job2'\'' is too big 00:25:29.431 Running I/O for 2 seconds... 
00:25:29.431 00:25:29.431 Latency(us) 00:25:29.431 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:29.431 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:25:29.431 Malloc0 : 2.01 41988.01 41.00 0.00 0.00 6090.47 1452.22 8996.31 00:25:29.431 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:25:29.431 Malloc0 : 2.01 41960.52 40.98 0.00 0.00 6084.53 1414.98 8400.52 00:25:29.431 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:25:29.431 Malloc0 : 2.01 41933.26 40.95 0.00 0.00 6077.57 1400.09 8340.95 00:25:29.431 =================================================================================================================== 00:25:29.431 Total : 125881.79 122.93 0.00 0.00 6084.19 1400.09 8996.31' 00:25:29.431 05:22:47 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:25:29.431 05:22:47 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:25:29.431 05:22:47 -- bdevperf/test_config.sh@35 -- # cleanup 00:25:29.431 05:22:47 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:25:29.431 05:22:47 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:25:29.431 05:22:47 -- bdevperf/common.sh@8 -- # local job_section=global 00:25:29.431 05:22:47 -- bdevperf/common.sh@9 -- # local rw=rw 00:25:29.431 05:22:47 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:25:29.431 05:22:47 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:25:29.431 05:22:47 -- bdevperf/common.sh@13 -- # cat 00:25:29.431 00:25:29.431 05:22:47 -- bdevperf/common.sh@18 -- # job='[global]' 00:25:29.431 05:22:47 -- bdevperf/common.sh@19 -- # echo 00:25:29.431 05:22:47 -- bdevperf/common.sh@20 -- # cat 00:25:29.431 05:22:47 -- bdevperf/test_config.sh@38 -- # create_job job0 00:25:29.431 05:22:47 -- bdevperf/common.sh@8 -- # local job_section=job0 00:25:29.431 05:22:47 -- bdevperf/common.sh@9 -- # local rw= 00:25:29.431 05:22:47 -- bdevperf/common.sh@10 -- # local filename= 00:25:29.431 05:22:47 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:25:29.431 05:22:47 -- bdevperf/common.sh@18 -- # job='[job0]' 00:25:29.431 00:25:29.431 05:22:47 -- bdevperf/common.sh@19 -- # echo 00:25:29.431 05:22:47 -- bdevperf/common.sh@20 -- # cat 00:25:29.431 05:22:47 -- bdevperf/test_config.sh@39 -- # create_job job1 00:25:29.431 05:22:47 -- bdevperf/common.sh@8 -- # local job_section=job1 00:25:29.431 05:22:47 -- bdevperf/common.sh@9 -- # local rw= 00:25:29.431 05:22:47 -- bdevperf/common.sh@10 -- # local filename= 00:25:29.431 05:22:47 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:25:29.431 00:25:29.431 05:22:47 -- bdevperf/common.sh@18 -- # job='[job1]' 00:25:29.431 05:22:47 -- bdevperf/common.sh@19 -- # echo 00:25:29.431 05:22:47 -- bdevperf/common.sh@20 -- # cat 00:25:29.431 05:22:47 -- bdevperf/test_config.sh@40 -- # create_job job2 00:25:29.431 05:22:47 -- bdevperf/common.sh@8 -- # local job_section=job2 00:25:29.431 05:22:47 -- bdevperf/common.sh@9 -- # local rw= 00:25:29.431 05:22:47 -- bdevperf/common.sh@10 -- # local filename= 00:25:29.431 05:22:47 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:25:29.431 05:22:47 -- bdevperf/common.sh@18 -- # job='[job2]' 00:25:29.431 00:25:29.431 05:22:47 -- bdevperf/common.sh@19 -- # echo 00:25:29.431 05:22:47 -- bdevperf/common.sh@20 -- # cat 00:25:29.431 00:25:29.431 05:22:47 -- bdevperf/test_config.sh@41 -- # create_job job3 00:25:29.431 05:22:47 -- bdevperf/common.sh@8 
-- # local job_section=job3 00:25:29.431 05:22:47 -- bdevperf/common.sh@9 -- # local rw= 00:25:29.431 05:22:47 -- bdevperf/common.sh@10 -- # local filename= 00:25:29.431 05:22:47 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:25:29.431 05:22:47 -- bdevperf/common.sh@18 -- # job='[job3]' 00:25:29.431 05:22:47 -- bdevperf/common.sh@19 -- # echo 00:25:29.431 05:22:47 -- bdevperf/common.sh@20 -- # cat 00:25:29.431 05:22:47 -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:25:32.720 05:22:51 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-26 05:22:47.922315] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:32.720 [2024-07-26 05:22:47.922480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87288 ] 00:25:32.720 Using job config with 4 jobs 00:25:32.720 [2024-07-26 05:22:48.090721] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.720 [2024-07-26 05:22:48.250693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.720 cpumask for '\''job0'\'' is too big 00:25:32.720 cpumask for '\''job1'\'' is too big 00:25:32.720 cpumask for '\''job2'\'' is too big 00:25:32.720 cpumask for '\''job3'\'' is too big 00:25:32.720 Running I/O for 2 seconds... 00:25:32.720 00:25:32.720 Latency(us) 00:25:32.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:32.720 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:32.720 Malloc0 : 2.02 15464.36 15.10 0.00 0.00 16542.41 3157.64 28716.68 00:25:32.720 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:32.720 Malloc1 : 2.02 15453.20 15.09 0.00 0.00 16539.93 4051.32 28478.37 00:25:32.720 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:32.720 Malloc0 : 2.03 15476.85 15.11 0.00 0.00 16462.77 3023.59 26214.40 00:25:32.720 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:32.720 Malloc1 : 2.04 15465.93 15.10 0.00 0.00 16463.09 3664.06 26810.18 00:25:32.720 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:32.720 Malloc0 : 2.04 15455.74 15.09 0.00 0.00 16428.95 3098.07 24307.90 00:25:32.720 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:32.720 Malloc1 : 2.04 15445.05 15.08 0.00 0.00 16425.97 3783.21 24665.37 00:25:32.720 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:32.720 Malloc0 : 2.04 15434.79 15.07 0.00 0.00 16387.35 3738.53 22639.71 00:25:32.720 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:32.720 Malloc1 : 2.04 15424.14 15.06 0.00 0.00 16383.45 5153.51 22758.87 00:25:32.720 =================================================================================================================== 00:25:32.720 Total : 123620.05 120.72 0.00 0.00 16454.06 3023.59 28716.68' 00:25:32.720 05:22:51 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-26 05:22:47.922315] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:25:32.720 [2024-07-26 05:22:47.922480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87288 ] 00:25:32.720 Using job config with 4 jobs 00:25:32.720 [2024-07-26 05:22:48.090721] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.720 [2024-07-26 05:22:48.250693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.720 cpumask for '\''job0'\'' is too big 00:25:32.720 cpumask for '\''job1'\'' is too big 00:25:32.720 cpumask for '\''job2'\'' is too big 00:25:32.720 cpumask for '\''job3'\'' is too big 00:25:32.720 Running I/O for 2 seconds... 00:25:32.720 00:25:32.720 Latency(us) 00:25:32.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:32.720 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:32.720 Malloc0 : 2.02 15464.36 15.10 0.00 0.00 16542.41 3157.64 28716.68 00:25:32.720 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:32.720 Malloc1 : 2.02 15453.20 15.09 0.00 0.00 16539.93 4051.32 28478.37 00:25:32.720 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:32.720 Malloc0 : 2.03 15476.85 15.11 0.00 0.00 16462.77 3023.59 26214.40 00:25:32.720 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:32.720 Malloc1 : 2.04 15465.93 15.10 0.00 0.00 16463.09 3664.06 26810.18 00:25:32.720 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:32.720 Malloc0 : 2.04 15455.74 15.09 0.00 0.00 16428.95 3098.07 24307.90 00:25:32.720 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:32.720 Malloc1 : 2.04 15445.05 15.08 0.00 0.00 16425.97 3783.21 24665.37 00:25:32.720 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:32.720 Malloc0 : 2.04 15434.79 15.07 0.00 0.00 16387.35 3738.53 22639.71 00:25:32.720 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:32.720 Malloc1 : 2.04 15424.14 15.06 0.00 0.00 16383.45 5153.51 22758.87 00:25:32.720 =================================================================================================================== 00:25:32.720 Total : 123620.05 120.72 0.00 0.00 16454.06 3023.59 28716.68' 00:25:32.720 05:22:51 -- bdevperf/common.sh@32 -- # echo '[2024-07-26 05:22:47.922315] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:32.720 [2024-07-26 05:22:47.922480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87288 ] 00:25:32.720 Using job config with 4 jobs 00:25:32.720 [2024-07-26 05:22:48.090721] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.720 [2024-07-26 05:22:48.250693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.720 cpumask for '\''job0'\'' is too big 00:25:32.720 cpumask for '\''job1'\'' is too big 00:25:32.720 cpumask for '\''job2'\'' is too big 00:25:32.720 cpumask for '\''job3'\'' is too big 00:25:32.720 Running I/O for 2 seconds... 
00:25:32.720 00:25:32.720 Latency(us) 00:25:32.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:32.720 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:32.720 Malloc0 : 2.02 15464.36 15.10 0.00 0.00 16542.41 3157.64 28716.68 00:25:32.720 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:32.720 Malloc1 : 2.02 15453.20 15.09 0.00 0.00 16539.93 4051.32 28478.37 00:25:32.720 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:32.720 Malloc0 : 2.03 15476.85 15.11 0.00 0.00 16462.77 3023.59 26214.40 00:25:32.720 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:32.720 Malloc1 : 2.04 15465.93 15.10 0.00 0.00 16463.09 3664.06 26810.18 00:25:32.720 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:32.720 Malloc0 : 2.04 15455.74 15.09 0.00 0.00 16428.95 3098.07 24307.90 00:25:32.720 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:32.721 Malloc1 : 2.04 15445.05 15.08 0.00 0.00 16425.97 3783.21 24665.37 00:25:32.721 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:32.721 Malloc0 : 2.04 15434.79 15.07 0.00 0.00 16387.35 3738.53 22639.71 00:25:32.721 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:32.721 Malloc1 : 2.04 15424.14 15.06 0.00 0.00 16383.45 5153.51 22758.87 00:25:32.721 =================================================================================================================== 00:25:32.721 Total : 123620.05 120.72 0.00 0.00 16454.06 3023.59 28716.68' 00:25:32.721 05:22:51 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:25:32.721 05:22:51 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:25:32.721 05:22:51 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:25:32.721 05:22:51 -- bdevperf/test_config.sh@44 -- # cleanup 00:25:32.721 05:22:51 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:25:32.721 05:22:51 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:32.721 ************************************ 00:25:32.721 END TEST bdevperf_config 00:25:32.721 ************************************ 00:25:32.721 00:25:32.721 real 0m15.676s 00:25:32.721 user 0m14.150s 00:25:32.721 sys 0m1.035s 00:25:32.721 05:22:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:32.721 05:22:51 -- common/autotest_common.sh@10 -- # set +x 00:25:32.721 05:22:51 -- spdk/autotest.sh@198 -- # uname -s 00:25:32.721 05:22:51 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:25:32.721 05:22:51 -- spdk/autotest.sh@199 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:25:32.721 05:22:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:32.721 05:22:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:32.721 05:22:51 -- common/autotest_common.sh@10 -- # set +x 00:25:32.721 ************************************ 00:25:32.721 START TEST reactor_set_interrupt 00:25:32.721 ************************************ 00:25:32.721 05:22:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:25:32.982 * Looking for test storage... 
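[note] The bdevperf_config flow traced above writes INI-style job sections into test.conf with create_job, launches bdevperf with the JSON bdev config plus that job file, and then verifies the expected job count by grepping "Using job config with N jobs" out of the captured output. A minimal sketch of that check, using simplified stand-ins for the create_job/get_num_jobs helpers rather than the exact common.sh source:

    # Sketch only: simplified versions of the helpers traced above.
    conf=test.conf

    create_job() {                     # append an INI-style job section
        local section=$1 rw=$2 filename=$3
        echo "[$section]" >> "$conf"
        if [[ -n $rw ]]; then echo "rw=$rw" >> "$conf"; fi
        if [[ -n $filename ]]; then echo "filename=$filename" >> "$conf"; fi
    }

    get_num_jobs() {                   # pull N out of "Using job config with N jobs"
        grep -oE 'Using job config with [0-9]+ jobs' <<< "$1" | grep -oE '[0-9]+'
    }

    create_job global rw Malloc0:Malloc1
    create_job job0
    create_job job1
    create_job job2
    create_job job3
    out=$(./build/examples/bdevperf -t 2 --json conf.json -j "$conf")
    [[ $(get_num_jobs "$out") == 4 ]]  # non-zero exit here fails the test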
00:25:32.982 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:25:32.982 05:22:51 -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:25:32.982 05:22:51 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:25:32.982 05:22:51 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:25:32.982 05:22:51 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:25:32.982 05:22:51 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:25:32.982 05:22:51 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:32.982 05:22:51 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:25:32.982 05:22:51 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:25:32.982 05:22:51 -- common/autotest_common.sh@34 -- # set -e 00:25:32.982 05:22:51 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:25:32.982 05:22:51 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:25:32.982 05:22:51 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:25:32.982 05:22:51 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:25:32.982 05:22:51 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:25:32.982 05:22:51 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:25:32.982 05:22:51 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:25:32.982 05:22:51 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:25:32.982 05:22:51 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:25:32.982 05:22:51 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:25:32.982 05:22:51 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:25:32.982 05:22:51 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:25:32.982 05:22:51 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:25:32.982 05:22:51 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:25:32.982 05:22:51 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:25:32.982 05:22:51 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:25:32.982 05:22:51 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:25:32.982 05:22:51 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:25:32.982 05:22:51 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:25:32.982 05:22:51 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:25:32.982 05:22:51 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:25:32.982 05:22:51 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:25:32.982 05:22:51 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:25:32.982 05:22:51 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:25:32.982 05:22:51 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:25:32.982 05:22:51 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:25:32.982 05:22:51 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:25:32.982 05:22:51 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:25:32.982 05:22:51 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:25:32.982 05:22:51 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:25:32.982 05:22:51 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 
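[note] The interrupt_common.sh prologue traced above first resolves its own directory and the repository root, then sources the shared autotest helpers and the generated build_config.sh. A reduced sketch of that bootstrap (using "$0" as a stand-in for the sourced script path, not the verbatim script):

    # Sketch only: path bootstrap seen at interrupt_common.sh@5-9 above.
    testdir=$(readlink -f "$(dirname "$0")")   # .../spdk/test/interrupt
    rootdir=$(readlink -f "$testdir/../..")    # spdk repository root
    source "$rootdir/test/common/autotest_common.sh"
    rpc_py=$rootdir/scripts/rpc.py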
00:25:32.982 05:22:51 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:25:32.982 05:22:51 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:25:32.982 05:22:51 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:25:32.982 05:22:51 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:25:32.982 05:22:51 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:25:32.982 05:22:51 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:25:32.982 05:22:51 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:25:32.982 05:22:51 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:25:32.982 05:22:51 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:25:32.982 05:22:51 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:25:32.982 05:22:51 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:25:32.982 05:22:51 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:25:32.982 05:22:51 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:25:32.982 05:22:51 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:25:32.982 05:22:51 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:25:32.982 05:22:51 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:25:32.982 05:22:51 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:25:32.982 05:22:51 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:25:32.982 05:22:51 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:25:32.982 05:22:51 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:25:32.982 05:22:51 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:25:32.982 05:22:51 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:25:32.982 05:22:51 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:25:32.982 05:22:51 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:25:32.982 05:22:51 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:25:32.982 05:22:51 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:25:32.982 05:22:51 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:25:32.982 05:22:51 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:25:32.982 05:22:51 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:25:32.982 05:22:51 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:25:32.982 05:22:51 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:25:32.982 05:22:51 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:25:32.982 05:22:51 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:25:32.982 05:22:51 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:25:32.982 05:22:51 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:25:32.982 05:22:51 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:25:32.982 05:22:51 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:25:32.982 05:22:51 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:25:32.982 05:22:51 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:25:32.982 05:22:51 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:25:32.982 05:22:51 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:25:32.982 05:22:51 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:25:32.982 05:22:51 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:25:32.982 05:22:51 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:25:32.982 05:22:51 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:25:32.982 05:22:51 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:25:32.982 05:22:51 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:25:32.982 05:22:51 -- 
common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:25:32.982 05:22:51 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:25:32.982 05:22:51 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:25:32.982 05:22:51 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:25:32.982 05:22:51 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:25:32.982 05:22:51 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:25:32.982 05:22:51 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:25:32.983 05:22:51 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:25:32.983 05:22:51 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:25:32.983 05:22:51 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:25:32.983 05:22:51 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:25:32.983 05:22:51 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:25:32.983 05:22:51 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:25:32.983 05:22:51 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:25:32.983 05:22:51 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:25:32.983 05:22:51 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:25:32.983 05:22:51 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:25:32.983 05:22:51 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:25:32.983 05:22:51 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:25:32.983 05:22:51 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:25:32.983 05:22:51 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:25:32.983 #define SPDK_CONFIG_H 00:25:32.983 #define SPDK_CONFIG_APPS 1 00:25:32.983 #define SPDK_CONFIG_ARCH native 00:25:32.983 #define SPDK_CONFIG_ASAN 1 00:25:32.983 #undef SPDK_CONFIG_AVAHI 00:25:32.983 #undef SPDK_CONFIG_CET 00:25:32.983 #define SPDK_CONFIG_COVERAGE 1 00:25:32.983 #define SPDK_CONFIG_CROSS_PREFIX 00:25:32.983 #undef SPDK_CONFIG_CRYPTO 00:25:32.983 #undef SPDK_CONFIG_CRYPTO_MLX5 00:25:32.983 #undef SPDK_CONFIG_CUSTOMOCF 00:25:32.983 #undef SPDK_CONFIG_DAOS 00:25:32.983 #define SPDK_CONFIG_DAOS_DIR 00:25:32.983 #define SPDK_CONFIG_DEBUG 1 00:25:32.983 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:25:32.983 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:25:32.983 #define SPDK_CONFIG_DPDK_INC_DIR 00:25:32.983 #define SPDK_CONFIG_DPDK_LIB_DIR 00:25:32.983 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:25:32.983 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:25:32.983 #define SPDK_CONFIG_EXAMPLES 1 00:25:32.983 #undef SPDK_CONFIG_FC 00:25:32.983 #define SPDK_CONFIG_FC_PATH 00:25:32.983 #define SPDK_CONFIG_FIO_PLUGIN 1 00:25:32.983 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:25:32.983 #undef SPDK_CONFIG_FUSE 00:25:32.983 #undef SPDK_CONFIG_FUZZER 00:25:32.983 #define SPDK_CONFIG_FUZZER_LIB 00:25:32.983 #undef SPDK_CONFIG_GOLANG 00:25:32.983 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:25:32.983 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:25:32.983 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:25:32.983 #undef SPDK_CONFIG_HAVE_LIBBSD 00:25:32.983 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:25:32.983 #define 
SPDK_CONFIG_IDXD 1 00:25:32.983 #define SPDK_CONFIG_IDXD_KERNEL 1 00:25:32.983 #undef SPDK_CONFIG_IPSEC_MB 00:25:32.983 #define SPDK_CONFIG_IPSEC_MB_DIR 00:25:32.983 #define SPDK_CONFIG_ISAL 1 00:25:32.983 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:25:32.983 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:25:32.983 #define SPDK_CONFIG_LIBDIR 00:25:32.983 #undef SPDK_CONFIG_LTO 00:25:32.983 #define SPDK_CONFIG_MAX_LCORES 00:25:32.983 #define SPDK_CONFIG_NVME_CUSE 1 00:25:32.983 #undef SPDK_CONFIG_OCF 00:25:32.983 #define SPDK_CONFIG_OCF_PATH 00:25:32.983 #define SPDK_CONFIG_OPENSSL_PATH 00:25:32.983 #undef SPDK_CONFIG_PGO_CAPTURE 00:25:32.983 #undef SPDK_CONFIG_PGO_USE 00:25:32.983 #define SPDK_CONFIG_PREFIX /usr/local 00:25:32.983 #define SPDK_CONFIG_RAID5F 1 00:25:32.983 #undef SPDK_CONFIG_RBD 00:25:32.983 #define SPDK_CONFIG_RDMA 1 00:25:32.983 #define SPDK_CONFIG_RDMA_PROV verbs 00:25:32.983 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:25:32.983 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:25:32.983 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:25:32.983 #undef SPDK_CONFIG_SHARED 00:25:32.983 #undef SPDK_CONFIG_SMA 00:25:32.983 #define SPDK_CONFIG_TESTS 1 00:25:32.983 #undef SPDK_CONFIG_TSAN 00:25:32.983 #define SPDK_CONFIG_UBLK 1 00:25:32.983 #define SPDK_CONFIG_UBSAN 1 00:25:32.983 #define SPDK_CONFIG_UNIT_TESTS 1 00:25:32.983 #undef SPDK_CONFIG_URING 00:25:32.983 #define SPDK_CONFIG_URING_PATH 00:25:32.983 #undef SPDK_CONFIG_URING_ZNS 00:25:32.983 #undef SPDK_CONFIG_USDT 00:25:32.983 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:25:32.983 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:25:32.983 #undef SPDK_CONFIG_VFIO_USER 00:25:32.983 #define SPDK_CONFIG_VFIO_USER_DIR 00:25:32.983 #define SPDK_CONFIG_VHOST 1 00:25:32.983 #define SPDK_CONFIG_VIRTIO 1 00:25:32.983 #undef SPDK_CONFIG_VTUNE 00:25:32.983 #define SPDK_CONFIG_VTUNE_DIR 00:25:32.983 #define SPDK_CONFIG_WERROR 1 00:25:32.983 #define SPDK_CONFIG_WPDK_DIR 00:25:32.983 #undef SPDK_CONFIG_XNVME 00:25:32.983 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:25:32.983 05:22:51 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:25:32.983 05:22:51 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:32.983 05:22:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:32.983 05:22:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:32.983 05:22:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:32.983 05:22:51 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:32.983 05:22:51 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:32.983 
05:22:51 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:32.983 05:22:51 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:32.983 05:22:51 -- paths/export.sh@6 -- # export PATH 00:25:32.983 05:22:51 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:32.983 05:22:51 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:25:32.983 05:22:51 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:25:32.983 05:22:51 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:25:32.983 05:22:51 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:25:32.983 05:22:51 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:25:32.983 05:22:51 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:25:32.983 05:22:51 -- pm/common@16 -- # TEST_TAG=N/A 00:25:32.983 05:22:51 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:25:32.983 05:22:51 -- common/autotest_common.sh@52 -- # : 1 00:25:32.983 05:22:51 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:25:32.983 05:22:51 -- common/autotest_common.sh@56 -- # : 0 00:25:32.983 05:22:51 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:25:32.983 05:22:51 -- common/autotest_common.sh@58 -- # : 0 00:25:32.983 05:22:51 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:25:32.983 05:22:51 -- common/autotest_common.sh@60 -- # : 1 00:25:32.983 05:22:51 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:25:32.983 05:22:51 -- common/autotest_common.sh@62 -- # : 1 00:25:32.983 05:22:51 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:25:32.983 05:22:51 -- common/autotest_common.sh@64 -- # : 00:25:32.983 05:22:51 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:25:32.983 05:22:51 -- common/autotest_common.sh@66 -- # : 0 00:25:32.983 05:22:51 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:25:32.983 
05:22:51 -- common/autotest_common.sh@68 -- # : 0 00:25:32.983 05:22:51 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:25:32.983 05:22:51 -- common/autotest_common.sh@70 -- # : 0 00:25:32.983 05:22:51 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:25:32.983 05:22:51 -- common/autotest_common.sh@72 -- # : 0 00:25:32.983 05:22:51 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:25:32.983 05:22:51 -- common/autotest_common.sh@74 -- # : 1 00:25:32.983 05:22:51 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:25:32.983 05:22:51 -- common/autotest_common.sh@76 -- # : 0 00:25:32.983 05:22:51 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:25:32.983 05:22:51 -- common/autotest_common.sh@78 -- # : 0 00:25:32.983 05:22:51 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:25:32.983 05:22:51 -- common/autotest_common.sh@80 -- # : 0 00:25:32.983 05:22:51 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:25:32.983 05:22:51 -- common/autotest_common.sh@82 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:25:32.984 05:22:51 -- common/autotest_common.sh@84 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:25:32.984 05:22:51 -- common/autotest_common.sh@86 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:25:32.984 05:22:51 -- common/autotest_common.sh@88 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:25:32.984 05:22:51 -- common/autotest_common.sh@90 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:25:32.984 05:22:51 -- common/autotest_common.sh@92 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:25:32.984 05:22:51 -- common/autotest_common.sh@94 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:25:32.984 05:22:51 -- common/autotest_common.sh@96 -- # : rdma 00:25:32.984 05:22:51 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:25:32.984 05:22:51 -- common/autotest_common.sh@98 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:25:32.984 05:22:51 -- common/autotest_common.sh@100 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:25:32.984 05:22:51 -- common/autotest_common.sh@102 -- # : 1 00:25:32.984 05:22:51 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:25:32.984 05:22:51 -- common/autotest_common.sh@104 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:25:32.984 05:22:51 -- common/autotest_common.sh@106 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:25:32.984 05:22:51 -- common/autotest_common.sh@108 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:25:32.984 05:22:51 -- common/autotest_common.sh@110 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:25:32.984 05:22:51 -- common/autotest_common.sh@112 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:25:32.984 05:22:51 -- common/autotest_common.sh@114 -- # : 1 00:25:32.984 05:22:51 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 
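[note] The long run of autotest_common.sh lines above is the test-flag bootstrap: each SPDK_TEST_*/SPDK_RUN_* switch gets a default via the ":" no-op expansion and is then exported, so values already set by autorun-spdk.conf (here SPDK_RUN_ASAN=1, SPDK_RUN_UBSAN=1, etc.) survive untouched. A sketch of the idiom behind the "# : 0" / "# export ..." pairs, with flag names taken from the log and defaults only illustrative:

    # Sketch only: default-and-export idiom inferred from the trace above.
    : "${SPDK_RUN_FUNCTIONAL_TEST:=0}"; export SPDK_RUN_FUNCTIONAL_TEST
    : "${SPDK_TEST_UNITTEST:=0}";       export SPDK_TEST_UNITTEST
    : "${SPDK_RUN_ASAN:=0}";            export SPDK_RUN_ASAN
    : "${SPDK_RUN_UBSAN:=0}";           export SPDK_RUN_UBSAN
    : "${SPDK_TEST_RAID5:=0}";          export SPDK_TEST_RAID5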
00:25:32.984 05:22:51 -- common/autotest_common.sh@116 -- # : 1 00:25:32.984 05:22:51 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:25:32.984 05:22:51 -- common/autotest_common.sh@118 -- # : 00:25:32.984 05:22:51 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:25:32.984 05:22:51 -- common/autotest_common.sh@120 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:25:32.984 05:22:51 -- common/autotest_common.sh@122 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:25:32.984 05:22:51 -- common/autotest_common.sh@124 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:25:32.984 05:22:51 -- common/autotest_common.sh@126 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:25:32.984 05:22:51 -- common/autotest_common.sh@128 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:25:32.984 05:22:51 -- common/autotest_common.sh@130 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:25:32.984 05:22:51 -- common/autotest_common.sh@132 -- # : 00:25:32.984 05:22:51 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:25:32.984 05:22:51 -- common/autotest_common.sh@134 -- # : true 00:25:32.984 05:22:51 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:25:32.984 05:22:51 -- common/autotest_common.sh@136 -- # : 1 00:25:32.984 05:22:51 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:25:32.984 05:22:51 -- common/autotest_common.sh@138 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:25:32.984 05:22:51 -- common/autotest_common.sh@140 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:25:32.984 05:22:51 -- common/autotest_common.sh@142 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:25:32.984 05:22:51 -- common/autotest_common.sh@144 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:25:32.984 05:22:51 -- common/autotest_common.sh@146 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:25:32.984 05:22:51 -- common/autotest_common.sh@148 -- # : 00:25:32.984 05:22:51 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:25:32.984 05:22:51 -- common/autotest_common.sh@150 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:25:32.984 05:22:51 -- common/autotest_common.sh@152 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:25:32.984 05:22:51 -- common/autotest_common.sh@154 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:25:32.984 05:22:51 -- common/autotest_common.sh@156 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:25:32.984 05:22:51 -- common/autotest_common.sh@158 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:25:32.984 05:22:51 -- common/autotest_common.sh@160 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:25:32.984 05:22:51 -- common/autotest_common.sh@163 -- # : 00:25:32.984 05:22:51 -- common/autotest_common.sh@164 -- # export 
SPDK_TEST_FUZZER_TARGET 00:25:32.984 05:22:51 -- common/autotest_common.sh@165 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:25:32.984 05:22:51 -- common/autotest_common.sh@167 -- # : 0 00:25:32.984 05:22:51 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:25:32.984 05:22:51 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:25:32.984 05:22:51 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:25:32.984 05:22:51 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:25:32.984 05:22:51 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:25:32.984 05:22:51 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:25:32.984 05:22:51 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:25:32.984 05:22:51 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:25:32.984 05:22:51 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:25:32.984 05:22:51 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:25:32.984 05:22:51 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:25:32.984 05:22:51 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:25:32.984 05:22:51 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:25:32.984 05:22:51 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:25:32.984 05:22:51 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:25:32.984 05:22:51 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:25:32.984 05:22:51 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:25:32.984 05:22:51 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:25:32.984 05:22:51 -- 
common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:25:32.984 05:22:51 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:25:32.984 05:22:51 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:25:32.984 05:22:51 -- common/autotest_common.sh@196 -- # cat 00:25:32.984 05:22:51 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:25:32.984 05:22:51 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:25:32.984 05:22:51 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:25:32.984 05:22:51 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:25:32.984 05:22:51 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:25:32.984 05:22:51 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:25:32.984 05:22:51 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:25:32.984 05:22:51 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:25:32.984 05:22:51 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:25:32.984 05:22:51 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:25:32.984 05:22:51 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:25:32.984 05:22:51 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:25:32.984 05:22:51 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:25:32.984 05:22:51 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:25:32.984 05:22:51 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:25:32.984 05:22:51 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:25:32.984 05:22:51 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:25:32.985 05:22:51 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:25:32.985 05:22:51 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:25:32.985 05:22:51 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:25:32.985 05:22:51 -- common/autotest_common.sh@249 -- # export valgrind= 00:25:32.985 05:22:51 -- common/autotest_common.sh@249 -- # valgrind= 00:25:32.985 05:22:51 -- common/autotest_common.sh@255 -- # uname -s 00:25:32.985 05:22:51 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:25:32.985 05:22:51 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:25:32.985 05:22:51 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:25:32.985 05:22:51 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:25:32.985 05:22:51 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:25:32.985 05:22:51 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:25:32.985 05:22:51 -- common/autotest_common.sh@265 -- # MAKE=make 00:25:32.985 05:22:51 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:25:32.985 05:22:51 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:25:32.985 05:22:51 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:25:32.985 05:22:51 -- common/autotest_common.sh@284 -- # '[' -z 
/home/vagrant/spdk_repo/spdk/../output ']' 00:25:32.985 05:22:51 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:25:32.985 05:22:51 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:25:32.985 05:22:51 -- common/autotest_common.sh@309 -- # [[ -z 87367 ]] 00:25:32.985 05:22:51 -- common/autotest_common.sh@309 -- # kill -0 87367 00:25:32.985 05:22:51 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:25:32.985 05:22:51 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:25:32.985 05:22:51 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:25:32.985 05:22:51 -- common/autotest_common.sh@322 -- # local mount target_dir 00:25:32.985 05:22:51 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:25:32.985 05:22:51 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:25:32.985 05:22:51 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:25:32.985 05:22:51 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:25:32.985 05:22:51 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.7YROYG 00:25:32.985 05:22:51 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:25:32.985 05:22:51 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:25:32.985 05:22:51 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:25:32.985 05:22:51 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.7YROYG/tests/interrupt /tmp/spdk.7YROYG 00:25:32.985 05:22:51 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:25:32.985 05:22:51 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:32.985 05:22:52 -- common/autotest_common.sh@318 -- # df -T 00:25:32.985 05:22:52 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:25:32.985 05:22:52 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:25:32.985 05:22:52 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:25:32.985 05:22:52 -- common/autotest_common.sh@353 -- # avails["$mount"]=1249308672 00:25:32.985 05:22:52 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254023168 00:25:32.985 05:22:52 -- common/autotest_common.sh@354 -- # uses["$mount"]=4714496 00:25:32.985 05:22:52 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:32.985 05:22:52 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:25:32.985 05:22:52 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:25:32.985 05:22:52 -- common/autotest_common.sh@353 -- # avails["$mount"]=10286374912 00:25:32.985 05:22:52 -- common/autotest_common.sh@353 -- # sizes["$mount"]=19681529856 00:25:32.985 05:22:52 -- common/autotest_common.sh@354 -- # uses["$mount"]=9378377728 00:25:32.985 05:22:52 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:32.985 05:22:52 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:25:32.985 05:22:52 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:25:32.985 05:22:52 -- common/autotest_common.sh@353 -- # avails["$mount"]=6268854272 00:25:32.985 05:22:52 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6270111744 00:25:32.985 05:22:52 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:25:32.985 05:22:52 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:32.985 05:22:52 -- 
common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:25:32.985 05:22:52 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:25:32.985 05:22:52 -- common/autotest_common.sh@353 -- # avails["$mount"]=5242880 00:25:32.985 05:22:52 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5242880 00:25:32.985 05:22:52 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:25:32.985 05:22:52 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:32.985 05:22:52 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda16 00:25:32.985 05:22:52 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:25:32.985 05:22:52 -- common/autotest_common.sh@353 -- # avails["$mount"]=777306112 00:25:32.985 05:22:52 -- common/autotest_common.sh@353 -- # sizes["$mount"]=923156480 00:25:32.985 05:22:52 -- common/autotest_common.sh@354 -- # uses["$mount"]=81207296 00:25:32.985 05:22:52 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:32.985 05:22:52 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda15 00:25:32.985 05:22:52 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:25:32.985 05:22:52 -- common/autotest_common.sh@353 -- # avails["$mount"]=103000064 00:25:32.985 05:22:52 -- common/autotest_common.sh@353 -- # sizes["$mount"]=109395968 00:25:32.985 05:22:52 -- common/autotest_common.sh@354 -- # uses["$mount"]=6395904 00:25:32.985 05:22:52 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:32.985 05:22:52 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:25:32.985 05:22:52 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:25:32.985 05:22:52 -- common/autotest_common.sh@353 -- # avails["$mount"]=1254006784 00:25:32.985 05:22:52 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254019072 00:25:32.985 05:22:52 -- common/autotest_common.sh@354 -- # uses["$mount"]=12288 00:25:32.985 05:22:52 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:32.985 05:22:52 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output 00:25:32.985 05:22:52 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:25:32.985 05:22:52 -- common/autotest_common.sh@353 -- # avails["$mount"]=98727718912 00:25:32.985 05:22:52 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:25:32.985 05:22:52 -- common/autotest_common.sh@354 -- # uses["$mount"]=975060992 00:25:32.985 05:22:52 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:32.985 05:22:52 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:25:32.985 * Looking for test storage... 
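[note] The set_test_storage trace above reads df output row by row into per-mount arrays (filesystem type, total size, available space) and then, in the lines that follow, picks a mount with at least the requested 2 GiB free to host SPDK_TEST_STORAGE. A reduced sketch of that scan; --block-size=1 is an assumption added here so the columns are plain bytes like the values in the log (the traced helper shows just "df -T"):

    # Sketch only: reduced version of the set_test_storage mount scan.
    requested_size=2147483648                  # 2 GiB, as requested in the log
    declare -A fss avails

    while read -r source fs size used avail _ mount; do
        fss["$mount"]=$fs
        avails["$mount"]=$avail
    done < <(df -T --block-size=1 | grep -v Filesystem)

    for mount in "${!avails[@]}"; do
        if (( avails[$mount] >= requested_size )); then
            echo "candidate test storage: $mount (${fss[$mount]})"
        fi
    done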
00:25:32.985 05:22:52 -- common/autotest_common.sh@359 -- # local target_space new_size 00:25:32.985 05:22:52 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:25:32.985 05:22:52 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:25:32.985 05:22:52 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:25:32.985 05:22:52 -- common/autotest_common.sh@363 -- # mount=/ 00:25:32.985 05:22:52 -- common/autotest_common.sh@365 -- # target_space=10286374912 00:25:32.985 05:22:52 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:25:32.985 05:22:52 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:25:32.985 05:22:52 -- common/autotest_common.sh@371 -- # [[ ext4 == tmpfs ]] 00:25:32.985 05:22:52 -- common/autotest_common.sh@371 -- # [[ ext4 == ramfs ]] 00:25:32.985 05:22:52 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:25:32.985 05:22:52 -- common/autotest_common.sh@372 -- # new_size=11592970240 00:25:32.985 05:22:52 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:25:32.985 05:22:52 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:25:32.985 05:22:52 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:25:32.985 05:22:52 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:25:32.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:25:32.985 05:22:52 -- common/autotest_common.sh@380 -- # return 0 00:25:32.985 05:22:52 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:25:32.985 05:22:52 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:25:32.985 05:22:52 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:25:32.985 05:22:52 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:25:32.985 05:22:52 -- common/autotest_common.sh@1672 -- # true 00:25:32.985 05:22:52 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:25:32.985 05:22:52 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:25:32.985 05:22:52 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:25:32.985 05:22:52 -- common/autotest_common.sh@27 -- # exec 00:25:32.985 05:22:52 -- common/autotest_common.sh@29 -- # exec 00:25:32.985 05:22:52 -- common/autotest_common.sh@31 -- # xtrace_restore 00:25:32.985 05:22:52 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:25:32.985 05:22:52 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:25:32.985 05:22:52 -- common/autotest_common.sh@18 -- # set -x 00:25:32.985 05:22:52 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:32.985 05:22:52 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:25:32.985 05:22:52 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:25:32.985 05:22:52 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:25:32.985 05:22:52 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:25:32.985 05:22:52 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:25:32.985 05:22:52 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:25:32.985 05:22:52 -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:25:32.986 05:22:52 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:25:32.986 05:22:52 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.986 05:22:52 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:25:32.986 05:22:52 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=87406 00:25:32.986 05:22:52 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:32.986 05:22:52 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 87406 /var/tmp/spdk.sock 00:25:32.986 05:22:52 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:25:32.986 05:22:52 -- common/autotest_common.sh@819 -- # '[' -z 87406 ']' 00:25:32.986 05:22:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.986 05:22:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:32.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:32.986 05:22:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:32.986 05:22:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:32.986 05:22:52 -- common/autotest_common.sh@10 -- # set +x 00:25:32.986 [2024-07-26 05:22:52.084104] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
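[note] The start_intr_tgt trace above launches the interrupt_tgt example on cores 0-2 (-m 0x07) with the RPC socket at /var/tmp/spdk.sock and then blocks in waitforlisten until that socket answers. A condensed sketch of the launch-and-wait step; the polling loop is illustrative, the real waitforlisten helper in autotest_common.sh does more bookkeeping:

    # Sketch only: condensed launch-and-wait from start_intr_tgt above.
    rpc_sock=/var/tmp/spdk.sock
    ./build/examples/interrupt_tgt -m 0x07 -r "$rpc_sock" -E -g &
    intr_tgt_pid=$!
    trap 'kill $intr_tgt_pid; exit 1' SIGINT SIGTERM EXIT

    # Illustrative stand-in for waitforlisten: poll until the RPC socket responds.
    for _ in $(seq 1 100); do
        if ./scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done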
00:25:32.986 [2024-07-26 05:22:52.084267] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87406 ] 00:25:33.245 [2024-07-26 05:22:52.254186] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:33.504 [2024-07-26 05:22:52.409352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:33.504 [2024-07-26 05:22:52.409471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.504 [2024-07-26 05:22:52.409500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:33.763 [2024-07-26 05:22:52.623753] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:34.022 05:22:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:34.022 05:22:53 -- common/autotest_common.sh@852 -- # return 0 00:25:34.022 05:22:53 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:25:34.022 05:22:53 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:34.281 Malloc0 00:25:34.281 Malloc1 00:25:34.281 Malloc2 00:25:34.281 05:22:53 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:25:34.281 05:22:53 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:25:34.281 05:22:53 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:25:34.281 05:22:53 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:25:34.539 5000+0 records in 00:25:34.539 5000+0 records out 00:25:34.539 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0224358 s, 456 MB/s 00:25:34.540 05:22:53 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:25:34.540 AIO0 00:25:34.540 05:22:53 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 87406 00:25:34.540 05:22:53 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 87406 without_thd 00:25:34.540 05:22:53 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=87406 00:25:34.540 05:22:53 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:25:34.540 05:22:53 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:25:34.540 05:22:53 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:25:34.540 05:22:53 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:25:34.540 05:22:53 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:25:34.540 05:22:53 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:25:34.540 05:22:53 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:25:34.540 05:22:53 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:25:34.540 05:22:53 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:25:34.798 05:22:53 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:25:34.798 05:22:53 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:25:34.798 05:22:53 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 
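The thread-id lookup traced above boils down to one RPC plus a jq filter. In the following sketch the jq filter string and the 0x1 -> 1 / 0x4 -> 4 normalization come verbatim from the trace; wrapping them in a standalone function and using arithmetic expansion for the hex-to-decimal step are assumptions.

# Sketch: map a reactor's cpumask to the ids of the SPDK threads running on it.
reactor_get_thread_ids() {
    local reactor_cpumask=$1
    local jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'

    reactor_cpumask=$((reactor_cpumask))   # 0x1 -> 1, 0x4 -> 4, as in the trace
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats |
        jq --arg reactor_cpumask "$reactor_cpumask" "$jq_str"
}

thd0_ids=($(reactor_get_thread_ids 0x1))   # "1": the app_thread sits on reactor 0
thd2_ids=($(reactor_get_thread_ids 0x4))   # empty: nothing is pinned to reactor 2 yet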
00:25:34.798 05:22:53 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:25:34.798 05:22:53 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:25:34.798 05:22:53 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:25:34.798 05:22:53 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:25:34.798 05:22:53 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:25:34.798 05:22:53 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:25:35.056 05:22:54 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:25:35.056 05:22:54 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:25:35.056 spdk_thread ids are 1 on reactor0. 00:25:35.056 05:22:54 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:25:35.056 05:22:54 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:25:35.056 05:22:54 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 87406 0 00:25:35.056 05:22:54 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 87406 0 idle 00:25:35.056 05:22:54 -- interrupt/interrupt_common.sh@33 -- # local pid=87406 00:25:35.056 05:22:54 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:25:35.056 05:22:54 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:25:35.056 05:22:54 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:25:35.056 05:22:54 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:25:35.057 05:22:54 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:35.057 05:22:54 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:35.057 05:22:54 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:35.057 05:22:54 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87406 -w 256 00:25:35.057 05:22:54 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:25:35.316 05:22:54 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87406 root 20 0 20.1t 148608 29696 S 10.0 1.2 0:00.61 reactor_0' 00:25:35.316 05:22:54 -- interrupt/interrupt_common.sh@48 -- # echo 87406 root 20 0 20.1t 148608 29696 S 10.0 1.2 0:00.61 reactor_0 00:25:35.316 05:22:54 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:35.316 05:22:54 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:35.316 05:22:54 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=10.0 00:25:35.316 05:22:54 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=10 00:25:35.316 05:22:54 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:25:35.316 05:22:54 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:25:35.316 05:22:54 -- interrupt/interrupt_common.sh@53 -- # [[ 10 -gt 30 ]] 00:25:35.316 05:22:54 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:35.316 05:22:54 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:25:35.316 05:22:54 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 87406 1 00:25:35.316 05:22:54 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 87406 1 idle 00:25:35.316 05:22:54 -- interrupt/interrupt_common.sh@33 -- # local pid=87406 00:25:35.316 05:22:54 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:25:35.316 05:22:54 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:25:35.316 05:22:54 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:25:35.316 
05:22:54 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:25:35.316 05:22:54 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:35.316 05:22:54 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:35.316 05:22:54 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:35.316 05:22:54 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87406 -w 256 00:25:35.316 05:22:54 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:25:35.575 05:22:54 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87419 root 20 0 20.1t 148608 29696 S 0.0 1.2 0:00.00 reactor_1' 00:25:35.575 05:22:54 -- interrupt/interrupt_common.sh@48 -- # echo 87419 root 20 0 20.1t 148608 29696 S 0.0 1.2 0:00.00 reactor_1 00:25:35.575 05:22:54 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:35.575 05:22:54 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:35.575 05:22:54 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:25:35.575 05:22:54 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:25:35.575 05:22:54 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:25:35.575 05:22:54 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:25:35.575 05:22:54 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:25:35.575 05:22:54 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:35.575 05:22:54 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:25:35.575 05:22:54 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 87406 2 00:25:35.575 05:22:54 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 87406 2 idle 00:25:35.575 05:22:54 -- interrupt/interrupt_common.sh@33 -- # local pid=87406 00:25:35.575 05:22:54 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:25:35.575 05:22:54 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:25:35.575 05:22:54 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:25:35.575 05:22:54 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:25:35.575 05:22:54 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:35.575 05:22:54 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:35.575 05:22:54 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:35.575 05:22:54 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87406 -w 256 00:25:35.575 05:22:54 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:25:35.834 05:22:54 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87420 root 20 0 20.1t 148608 29696 S 0.0 1.2 0:00.00 reactor_2' 00:25:35.834 05:22:54 -- interrupt/interrupt_common.sh@48 -- # echo 87420 root 20 0 20.1t 148608 29696 S 0.0 1.2 0:00.00 reactor_2 00:25:35.834 05:22:54 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:35.834 05:22:54 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:35.834 05:22:54 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:25:35.834 05:22:54 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:25:35.834 05:22:54 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:25:35.834 05:22:54 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:25:35.834 05:22:54 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:25:35.834 05:22:54 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:35.834 05:22:54 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:25:35.834 05:22:54 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:25:35.834 
05:22:54 -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:25:36.093 [2024-07-26 05:22:54.980571] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:36.093 05:22:54 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:25:36.353 [2024-07-26 05:22:55.228260] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:25:36.353 [2024-07-26 05:22:55.229189] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:25:36.353 05:22:55 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:25:36.353 [2024-07-26 05:22:55.412103] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:25:36.353 [2024-07-26 05:22:55.413137] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:25:36.353 05:22:55 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:25:36.353 05:22:55 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 87406 0 00:25:36.353 05:22:55 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 87406 0 busy 00:25:36.353 05:22:55 -- interrupt/interrupt_common.sh@33 -- # local pid=87406 00:25:36.353 05:22:55 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:25:36.353 05:22:55 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:25:36.353 05:22:55 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:25:36.353 05:22:55 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:36.353 05:22:55 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:36.353 05:22:55 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:36.353 05:22:55 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87406 -w 256 00:25:36.353 05:22:55 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:25:36.632 05:22:55 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87406 root 20 0 20.1t 152064 29696 R 90.9 1.2 0:01.03 reactor_0' 00:25:36.633 05:22:55 -- interrupt/interrupt_common.sh@48 -- # echo 87406 root 20 0 20.1t 152064 29696 R 90.9 1.2 0:01.03 reactor_0 00:25:36.633 05:22:55 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:36.633 05:22:55 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:36.633 05:22:55 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=90.9 00:25:36.633 05:22:55 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=90 00:25:36.633 05:22:55 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:25:36.633 05:22:55 -- interrupt/interrupt_common.sh@51 -- # [[ 90 -lt 70 ]] 00:25:36.633 05:22:55 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:25:36.633 05:22:55 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:36.633 05:22:55 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:25:36.633 05:22:55 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 87406 2 00:25:36.633 05:22:55 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 87406 2 busy 00:25:36.633 05:22:55 -- interrupt/interrupt_common.sh@33 -- # local pid=87406 00:25:36.633 05:22:55 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:25:36.633 05:22:55 -- 
interrupt/interrupt_common.sh@35 -- # local state=busy 00:25:36.633 05:22:55 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:25:36.633 05:22:55 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:36.633 05:22:55 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:36.633 05:22:55 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:36.633 05:22:55 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87406 -w 256 00:25:36.633 05:22:55 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:25:36.903 05:22:55 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87420 root 20 0 20.1t 152064 29696 R 99.9 1.2 0:00.44 reactor_2' 00:25:36.903 05:22:55 -- interrupt/interrupt_common.sh@48 -- # echo 87420 root 20 0 20.1t 152064 29696 R 99.9 1.2 0:00.44 reactor_2 00:25:36.903 05:22:55 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:36.903 05:22:55 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:36.903 05:22:55 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:25:36.903 05:22:55 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:25:36.904 05:22:55 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:25:36.904 05:22:55 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:25:36.904 05:22:55 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:25:36.904 05:22:55 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:36.904 05:22:55 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:25:37.163 [2024-07-26 05:22:56.092161] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:25:37.163 [2024-07-26 05:22:56.092960] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:25:37.163 05:22:56 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:25:37.163 05:22:56 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 87406 2 00:25:37.163 05:22:56 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 87406 2 idle 00:25:37.163 05:22:56 -- interrupt/interrupt_common.sh@33 -- # local pid=87406 00:25:37.163 05:22:56 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:25:37.163 05:22:56 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:25:37.163 05:22:56 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:25:37.163 05:22:56 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:25:37.163 05:22:56 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:37.163 05:22:56 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:37.163 05:22:56 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:37.163 05:22:56 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87406 -w 256 00:25:37.163 05:22:56 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:25:37.422 05:22:56 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87420 root 20 0 20.1t 152064 29696 S 0.0 1.2 0:00.67 reactor_2' 00:25:37.422 05:22:56 -- interrupt/interrupt_common.sh@48 -- # echo 87420 root 20 0 20.1t 152064 29696 S 0.0 1.2 0:00.67 reactor_2 00:25:37.422 05:22:56 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:37.422 05:22:56 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:37.422 05:22:56 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:25:37.422 05:22:56 -- interrupt/interrupt_common.sh@49 -- # 
cpu_rate=0 00:25:37.422 05:22:56 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:25:37.422 05:22:56 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:25:37.422 05:22:56 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:25:37.422 05:22:56 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:37.422 05:22:56 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:25:37.681 [2024-07-26 05:22:56.548102] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:25:37.681 [2024-07-26 05:22:56.548836] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:25:37.681 05:22:56 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:25:37.681 05:22:56 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:25:37.681 05:22:56 -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:25:37.681 [2024-07-26 05:22:56.788552] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:37.941 05:22:56 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 87406 0 00:25:37.941 05:22:56 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 87406 0 idle 00:25:37.941 05:22:56 -- interrupt/interrupt_common.sh@33 -- # local pid=87406 00:25:37.941 05:22:56 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:25:37.941 05:22:56 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:25:37.941 05:22:56 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:25:37.941 05:22:56 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:25:37.941 05:22:56 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:37.941 05:22:56 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:37.941 05:22:56 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:37.941 05:22:56 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:25:37.941 05:22:56 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87406 -w 256 00:25:37.941 05:22:57 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87406 root 20 0 20.1t 152192 29696 S 0.0 1.2 0:01.94 reactor_0' 00:25:37.941 05:22:57 -- interrupt/interrupt_common.sh@48 -- # echo 87406 root 20 0 20.1t 152192 29696 S 0.0 1.2 0:01.94 reactor_0 00:25:37.941 05:22:57 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:37.941 05:22:57 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:37.941 05:22:57 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:25:37.941 05:22:57 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:25:37.941 05:22:57 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:25:37.941 05:22:57 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:25:37.941 05:22:57 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:25:37.941 05:22:57 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:37.941 05:22:57 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:25:37.941 05:22:57 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:25:37.941 05:22:57 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:25:37.941 05:22:57 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 87406 00:25:37.941 05:22:57 -- common/autotest_common.sh@926 
-- # '[' -z 87406 ']' 00:25:37.941 05:22:57 -- common/autotest_common.sh@930 -- # kill -0 87406 00:25:37.941 05:22:57 -- common/autotest_common.sh@931 -- # uname 00:25:37.941 05:22:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:37.941 05:22:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87406 00:25:38.200 05:22:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:38.200 05:22:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:38.200 killing process with pid 87406 00:25:38.200 05:22:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87406' 00:25:38.200 05:22:57 -- common/autotest_common.sh@945 -- # kill 87406 00:25:38.200 05:22:57 -- common/autotest_common.sh@950 -- # wait 87406 00:25:39.137 05:22:58 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:25:39.137 05:22:58 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:25:39.138 05:22:58 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:25:39.138 05:22:58 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:39.138 05:22:58 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:25:39.138 05:22:58 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=87549 00:25:39.138 05:22:58 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:25:39.138 05:22:58 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:39.138 05:22:58 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 87549 /var/tmp/spdk.sock 00:25:39.138 05:22:58 -- common/autotest_common.sh@819 -- # '[' -z 87549 ']' 00:25:39.138 05:22:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:39.138 05:22:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:39.138 05:22:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:39.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:39.138 05:22:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:39.138 05:22:58 -- common/autotest_common.sh@10 -- # set +x 00:25:39.138 [2024-07-26 05:22:58.202370] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:39.138 [2024-07-26 05:22:58.202522] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87549 ] 00:25:39.397 [2024-07-26 05:22:58.355561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:39.656 [2024-07-26 05:22:58.516990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:39.657 [2024-07-26 05:22:58.517047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.657 [2024-07-26 05:22:58.517066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:39.657 [2024-07-26 05:22:58.727395] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
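The first target (pid 87406) was torn down above via killprocess before the second interrupt_tgt (pid 87549) was started. A compact paraphrase of the killprocess steps visible in the trace, assuming only the Linux branch that the log exercises:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2> /dev/null || return 0   # already gone, nothing to do
    if [ "$(uname)" = Linux ]; then
        # Refuse to kill a sudo wrapper; the trace only shows the reactor_0 != sudo case.
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" != sudo ] || return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}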
00:25:40.225 05:22:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:40.225 05:22:59 -- common/autotest_common.sh@852 -- # return 0 00:25:40.225 05:22:59 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:25:40.225 05:22:59 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:40.484 Malloc0 00:25:40.484 Malloc1 00:25:40.484 Malloc2 00:25:40.484 05:22:59 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:25:40.484 05:22:59 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:25:40.484 05:22:59 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:25:40.484 05:22:59 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:25:40.484 5000+0 records in 00:25:40.484 5000+0 records out 00:25:40.484 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0217732 s, 470 MB/s 00:25:40.484 05:22:59 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:25:40.744 AIO0 00:25:40.744 05:22:59 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 87549 00:25:40.744 05:22:59 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 87549 00:25:40.744 05:22:59 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=87549 00:25:40.744 05:22:59 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:25:40.744 05:22:59 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:25:40.744 05:22:59 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:25:40.744 05:22:59 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:25:40.744 05:22:59 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:25:40.744 05:22:59 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:25:40.744 05:22:59 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:25:40.744 05:22:59 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:25:40.744 05:22:59 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:25:41.003 05:22:59 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:25:41.003 05:22:59 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:25:41.003 05:22:59 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:25:41.003 05:22:59 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:25:41.003 05:22:59 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:25:41.003 05:22:59 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:25:41.003 05:22:59 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:25:41.003 05:22:59 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:25:41.003 05:22:59 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:25:41.263 05:23:00 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:25:41.263 spdk_thread ids are 1 on reactor0. 
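Each reactor_is_idle / reactor_is_busy check in this trace samples one batch iteration of top for the target pid and reads the %CPU column of the reactor_N thread row. The single-shot sketch below infers the >=70 busy and <=30 idle thresholds from the logged comparisons; the real helper retries up to 10 times (the j=10 countdown in the trace) instead of probing once.

# Sketch: decide whether reactor <idx> of <pid> currently looks busy or idle.
reactor_probe() {
    local pid=$1 idx=$2 state=$3   # state: busy | idle
    local row cpu_rate
    row=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}")
    cpu_rate=$(echo "$row" | sed -e 's/^\s*//g' | awk '{print $9}')
    cpu_rate=${cpu_rate%.*}        # 99.9 -> 99, 0.0 -> 0, as in the trace
    if [ "$state" = busy ]; then
        [ "$cpu_rate" -ge 70 ]     # polling reactors run near 100% CPU
    else
        [ "$cpu_rate" -le 30 ]     # interrupt-mode reactors sit near 0% CPU
    fi
}

# e.g. reactor_probe 87549 0 idle   # succeeds once reactor 0 is back in interrupt mode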
00:25:41.263 05:23:00 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:25:41.263 05:23:00 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:25:41.263 05:23:00 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 87549 0 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 87549 0 idle 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@33 -- # local pid=87549 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87549 -w 256 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87549 root 20 0 20.1t 148992 29952 S 0.0 1.2 0:00.57 reactor_0' 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@48 -- # echo 87549 root 20 0 20.1t 148992 29952 S 0.0 1.2 0:00.57 reactor_0 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:41.263 05:23:00 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:25:41.263 05:23:00 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 87549 1 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 87549 1 idle 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@33 -- # local pid=87549 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87549 -w 256 00:25:41.263 05:23:00 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:25:41.523 05:23:00 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87562 root 20 0 20.1t 148992 29952 S 0.0 1.2 0:00.00 reactor_1' 00:25:41.523 05:23:00 -- interrupt/interrupt_common.sh@48 -- # echo 87562 root 20 0 20.1t 148992 29952 S 0.0 1.2 0:00.00 reactor_1 00:25:41.523 05:23:00 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:41.523 05:23:00 -- 
interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:41.523 05:23:00 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:25:41.523 05:23:00 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:25:41.523 05:23:00 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:25:41.523 05:23:00 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:25:41.523 05:23:00 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:25:41.523 05:23:00 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:41.523 05:23:00 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:25:41.523 05:23:00 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 87549 2 00:25:41.523 05:23:00 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 87549 2 idle 00:25:41.523 05:23:00 -- interrupt/interrupt_common.sh@33 -- # local pid=87549 00:25:41.523 05:23:00 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:25:41.523 05:23:00 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:25:41.523 05:23:00 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:25:41.523 05:23:00 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:25:41.523 05:23:00 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:41.523 05:23:00 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:41.523 05:23:00 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:41.523 05:23:00 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87549 -w 256 00:25:41.523 05:23:00 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:25:41.782 05:23:00 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87563 root 20 0 20.1t 148992 29952 S 0.0 1.2 0:00.00 reactor_2' 00:25:41.782 05:23:00 -- interrupt/interrupt_common.sh@48 -- # echo 87563 root 20 0 20.1t 148992 29952 S 0.0 1.2 0:00.00 reactor_2 00:25:41.782 05:23:00 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:41.782 05:23:00 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:41.782 05:23:00 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:25:41.782 05:23:00 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:25:41.782 05:23:00 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:25:41.782 05:23:00 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:25:41.782 05:23:00 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:25:41.782 05:23:00 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:41.782 05:23:00 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:25:41.782 05:23:00 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:25:42.040 [2024-07-26 05:23:01.035758] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:25:42.040 [2024-07-26 05:23:01.036067] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:25:42.040 [2024-07-26 05:23:01.037243] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:25:42.040 05:23:01 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:25:42.298 [2024-07-26 05:23:01.287581] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
00:25:42.298 [2024-07-26 05:23:01.288658] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:25:42.298 05:23:01 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:25:42.298 05:23:01 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 87549 0 00:25:42.298 05:23:01 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 87549 0 busy 00:25:42.298 05:23:01 -- interrupt/interrupt_common.sh@33 -- # local pid=87549 00:25:42.298 05:23:01 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:25:42.298 05:23:01 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:25:42.298 05:23:01 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:25:42.298 05:23:01 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:42.298 05:23:01 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:42.298 05:23:01 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:42.298 05:23:01 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:25:42.298 05:23:01 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87549 -w 256 00:25:42.556 05:23:01 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87549 root 20 0 20.1t 152320 29952 R 99.9 1.2 0:01.07 reactor_0' 00:25:42.556 05:23:01 -- interrupt/interrupt_common.sh@48 -- # echo 87549 root 20 0 20.1t 152320 29952 R 99.9 1.2 0:01.07 reactor_0 00:25:42.556 05:23:01 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:42.556 05:23:01 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:42.556 05:23:01 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:25:42.556 05:23:01 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:25:42.556 05:23:01 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:25:42.556 05:23:01 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:25:42.556 05:23:01 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:25:42.556 05:23:01 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:42.556 05:23:01 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:25:42.556 05:23:01 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 87549 2 00:25:42.556 05:23:01 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 87549 2 busy 00:25:42.556 05:23:01 -- interrupt/interrupt_common.sh@33 -- # local pid=87549 00:25:42.556 05:23:01 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:25:42.556 05:23:01 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:25:42.556 05:23:01 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:25:42.556 05:23:01 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:42.556 05:23:01 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:42.556 05:23:01 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:42.556 05:23:01 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87549 -w 256 00:25:42.556 05:23:01 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:25:42.815 05:23:01 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87563 root 20 0 20.1t 152320 29952 R 99.9 1.2 0:00.44 reactor_2' 00:25:42.815 05:23:01 -- interrupt/interrupt_common.sh@48 -- # echo 87563 root 20 0 20.1t 152320 29952 R 99.9 1.2 0:00.44 reactor_2 00:25:42.815 05:23:01 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:42.815 05:23:01 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:42.815 05:23:01 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:25:42.815 05:23:01 -- 
interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:25:42.815 05:23:01 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:25:42.815 05:23:01 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:25:42.815 05:23:01 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:25:42.815 05:23:01 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:42.815 05:23:01 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:25:42.815 [2024-07-26 05:23:01.923739] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:25:42.815 [2024-07-26 05:23:01.924046] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:25:43.074 05:23:01 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:25:43.074 05:23:01 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 87549 2 00:25:43.074 05:23:01 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 87549 2 idle 00:25:43.074 05:23:01 -- interrupt/interrupt_common.sh@33 -- # local pid=87549 00:25:43.074 05:23:01 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:25:43.074 05:23:01 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:25:43.074 05:23:01 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:25:43.074 05:23:01 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:25:43.074 05:23:01 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:43.074 05:23:01 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:43.074 05:23:01 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:43.074 05:23:01 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87549 -w 256 00:25:43.074 05:23:01 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:25:43.074 05:23:02 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87563 root 20 0 20.1t 152320 29952 S 0.0 1.2 0:00.62 reactor_2' 00:25:43.074 05:23:02 -- interrupt/interrupt_common.sh@48 -- # echo 87563 root 20 0 20.1t 152320 29952 S 0.0 1.2 0:00.62 reactor_2 00:25:43.074 05:23:02 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:43.074 05:23:02 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:43.074 05:23:02 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:25:43.074 05:23:02 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:25:43.074 05:23:02 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:25:43.074 05:23:02 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:25:43.074 05:23:02 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:25:43.074 05:23:02 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:43.074 05:23:02 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:25:43.334 [2024-07-26 05:23:02.359821] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:25:43.334 [2024-07-26 05:23:02.360324] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 
00:25:43.334 [2024-07-26 05:23:02.360385] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:25:43.334 05:23:02 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:25:43.334 05:23:02 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 87549 0 00:25:43.334 05:23:02 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 87549 0 idle 00:25:43.334 05:23:02 -- interrupt/interrupt_common.sh@33 -- # local pid=87549 00:25:43.334 05:23:02 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:25:43.334 05:23:02 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:25:43.334 05:23:02 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:25:43.334 05:23:02 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:25:43.334 05:23:02 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:43.334 05:23:02 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:43.334 05:23:02 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:43.334 05:23:02 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87549 -w 256 00:25:43.334 05:23:02 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:25:43.596 05:23:02 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87549 root 20 0 20.1t 152320 29952 S 0.0 1.2 0:01.92 reactor_0' 00:25:43.596 05:23:02 -- interrupt/interrupt_common.sh@48 -- # echo 87549 root 20 0 20.1t 152320 29952 S 0.0 1.2 0:01.92 reactor_0 00:25:43.596 05:23:02 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:43.596 05:23:02 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:43.596 05:23:02 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:25:43.596 05:23:02 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:25:43.596 05:23:02 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:25:43.596 05:23:02 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:25:43.596 05:23:02 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:25:43.596 05:23:02 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:43.596 05:23:02 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:25:43.596 05:23:02 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:25:43.596 05:23:02 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:25:43.596 05:23:02 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 87549 00:25:43.596 05:23:02 -- common/autotest_common.sh@926 -- # '[' -z 87549 ']' 00:25:43.596 05:23:02 -- common/autotest_common.sh@930 -- # kill -0 87549 00:25:43.596 05:23:02 -- common/autotest_common.sh@931 -- # uname 00:25:43.596 05:23:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:43.596 05:23:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87549 00:25:43.596 killing process with pid 87549 00:25:43.596 05:23:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:43.596 05:23:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:43.596 05:23:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87549' 00:25:43.596 05:23:02 -- common/autotest_common.sh@945 -- # kill 87549 00:25:43.596 05:23:02 -- common/autotest_common.sh@950 -- # wait 87549 00:25:44.973 05:23:03 -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:25:44.973 05:23:03 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:25:44.973 ************************************ 00:25:44.973 END 
TEST reactor_set_interrupt 00:25:44.973 ************************************ 00:25:44.973 00:25:44.973 real 0m11.909s 00:25:44.973 user 0m11.617s 00:25:44.973 sys 0m1.580s 00:25:44.973 05:23:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:44.973 05:23:03 -- common/autotest_common.sh@10 -- # set +x 00:25:44.973 05:23:03 -- spdk/autotest.sh@200 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:25:44.973 05:23:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:44.973 05:23:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:44.973 05:23:03 -- common/autotest_common.sh@10 -- # set +x 00:25:44.973 ************************************ 00:25:44.973 START TEST reap_unregistered_poller 00:25:44.973 ************************************ 00:25:44.973 05:23:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:25:44.973 * Looking for test storage... 00:25:44.973 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:25:44.973 05:23:03 -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:25:44.973 05:23:03 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:25:44.973 05:23:03 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:25:44.973 05:23:03 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:25:44.973 05:23:03 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:25:44.973 05:23:03 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:44.973 05:23:03 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:25:44.973 05:23:03 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:25:44.973 05:23:03 -- common/autotest_common.sh@34 -- # set -e 00:25:44.973 05:23:03 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:25:44.973 05:23:03 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:25:44.973 05:23:03 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:25:44.973 05:23:03 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:25:44.973 05:23:03 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:25:44.973 05:23:03 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:25:44.973 05:23:03 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:25:44.973 05:23:03 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:25:44.973 05:23:03 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:25:44.973 05:23:03 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:25:44.973 05:23:03 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:25:44.973 05:23:03 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:25:44.973 05:23:03 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:25:44.973 05:23:03 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:25:44.973 05:23:03 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:25:44.973 05:23:03 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:25:44.973 05:23:03 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:25:44.973 05:23:03 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:25:44.973 05:23:03 -- 
common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:25:44.973 05:23:03 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:25:44.973 05:23:03 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:25:44.973 05:23:03 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:25:44.973 05:23:03 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:25:44.973 05:23:03 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:25:44.973 05:23:03 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:25:44.973 05:23:03 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:25:44.973 05:23:03 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:25:44.973 05:23:03 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:25:44.973 05:23:03 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:25:44.973 05:23:03 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:25:44.973 05:23:03 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:25:44.973 05:23:03 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:25:44.973 05:23:03 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:25:44.973 05:23:03 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:25:44.973 05:23:03 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:25:44.973 05:23:03 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:25:44.973 05:23:03 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:25:44.973 05:23:03 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:25:44.973 05:23:03 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:25:44.973 05:23:03 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:25:44.973 05:23:03 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:25:44.973 05:23:03 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:25:44.973 05:23:03 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:25:44.973 05:23:03 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:25:44.973 05:23:03 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:25:44.973 05:23:03 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:25:44.973 05:23:03 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:25:44.973 05:23:03 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:25:44.973 05:23:03 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:25:44.973 05:23:03 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:25:44.973 05:23:03 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:25:44.973 05:23:03 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:25:44.973 05:23:03 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:25:44.973 05:23:03 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:25:44.973 05:23:03 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:25:44.973 05:23:03 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:25:44.973 05:23:03 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:25:44.973 05:23:03 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:25:44.973 05:23:03 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:25:44.973 05:23:03 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:25:44.973 05:23:03 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:25:44.973 05:23:03 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:25:44.973 05:23:03 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:25:44.973 05:23:03 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:25:44.973 05:23:03 -- 
common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:25:44.973 05:23:03 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:25:44.973 05:23:03 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:25:44.973 05:23:03 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:25:44.973 05:23:03 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:25:44.973 05:23:03 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:25:44.973 05:23:03 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:25:44.973 05:23:03 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:25:44.973 05:23:03 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:25:44.973 05:23:03 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:25:44.973 05:23:03 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:25:44.973 05:23:03 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:25:44.973 05:23:03 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:25:44.973 05:23:03 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:25:44.973 05:23:03 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:25:44.973 05:23:03 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:25:44.973 05:23:03 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:25:44.973 05:23:03 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:25:44.973 05:23:03 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:25:44.973 05:23:03 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:25:44.973 05:23:03 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:25:44.973 05:23:03 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:25:44.973 05:23:03 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:25:44.973 05:23:03 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:25:44.973 05:23:03 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:25:44.973 05:23:03 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:25:44.973 05:23:03 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:25:44.973 05:23:03 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:25:44.973 05:23:03 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:25:44.974 05:23:03 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:25:44.974 05:23:03 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:25:44.974 05:23:03 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:25:44.974 05:23:03 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:25:44.974 05:23:03 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:25:44.974 05:23:03 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:25:44.974 #define SPDK_CONFIG_H 00:25:44.974 #define SPDK_CONFIG_APPS 1 00:25:44.974 #define SPDK_CONFIG_ARCH native 00:25:44.974 #define SPDK_CONFIG_ASAN 1 00:25:44.974 #undef SPDK_CONFIG_AVAHI 00:25:44.974 #undef SPDK_CONFIG_CET 00:25:44.974 #define SPDK_CONFIG_COVERAGE 1 00:25:44.974 #define SPDK_CONFIG_CROSS_PREFIX 00:25:44.974 #undef SPDK_CONFIG_CRYPTO 00:25:44.974 #undef SPDK_CONFIG_CRYPTO_MLX5 00:25:44.974 #undef SPDK_CONFIG_CUSTOMOCF 00:25:44.974 #undef SPDK_CONFIG_DAOS 00:25:44.974 #define SPDK_CONFIG_DAOS_DIR 00:25:44.974 #define 
SPDK_CONFIG_DEBUG 1 00:25:44.974 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:25:44.974 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:25:44.974 #define SPDK_CONFIG_DPDK_INC_DIR 00:25:44.974 #define SPDK_CONFIG_DPDK_LIB_DIR 00:25:44.974 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:25:44.974 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:25:44.974 #define SPDK_CONFIG_EXAMPLES 1 00:25:44.974 #undef SPDK_CONFIG_FC 00:25:44.974 #define SPDK_CONFIG_FC_PATH 00:25:44.974 #define SPDK_CONFIG_FIO_PLUGIN 1 00:25:44.974 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:25:44.974 #undef SPDK_CONFIG_FUSE 00:25:44.974 #undef SPDK_CONFIG_FUZZER 00:25:44.974 #define SPDK_CONFIG_FUZZER_LIB 00:25:44.974 #undef SPDK_CONFIG_GOLANG 00:25:44.974 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:25:44.974 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:25:44.974 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:25:44.974 #undef SPDK_CONFIG_HAVE_LIBBSD 00:25:44.974 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:25:44.974 #define SPDK_CONFIG_IDXD 1 00:25:44.974 #define SPDK_CONFIG_IDXD_KERNEL 1 00:25:44.974 #undef SPDK_CONFIG_IPSEC_MB 00:25:44.974 #define SPDK_CONFIG_IPSEC_MB_DIR 00:25:44.974 #define SPDK_CONFIG_ISAL 1 00:25:44.974 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:25:44.974 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:25:44.974 #define SPDK_CONFIG_LIBDIR 00:25:44.974 #undef SPDK_CONFIG_LTO 00:25:44.974 #define SPDK_CONFIG_MAX_LCORES 00:25:44.974 #define SPDK_CONFIG_NVME_CUSE 1 00:25:44.974 #undef SPDK_CONFIG_OCF 00:25:44.974 #define SPDK_CONFIG_OCF_PATH 00:25:44.974 #define SPDK_CONFIG_OPENSSL_PATH 00:25:44.974 #undef SPDK_CONFIG_PGO_CAPTURE 00:25:44.974 #undef SPDK_CONFIG_PGO_USE 00:25:44.974 #define SPDK_CONFIG_PREFIX /usr/local 00:25:44.974 #define SPDK_CONFIG_RAID5F 1 00:25:44.974 #undef SPDK_CONFIG_RBD 00:25:44.974 #define SPDK_CONFIG_RDMA 1 00:25:44.974 #define SPDK_CONFIG_RDMA_PROV verbs 00:25:44.974 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:25:44.974 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:25:44.974 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:25:44.974 #undef SPDK_CONFIG_SHARED 00:25:44.974 #undef SPDK_CONFIG_SMA 00:25:44.974 #define SPDK_CONFIG_TESTS 1 00:25:44.974 #undef SPDK_CONFIG_TSAN 00:25:44.974 #define SPDK_CONFIG_UBLK 1 00:25:44.974 #define SPDK_CONFIG_UBSAN 1 00:25:44.974 #define SPDK_CONFIG_UNIT_TESTS 1 00:25:44.974 #undef SPDK_CONFIG_URING 00:25:44.974 #define SPDK_CONFIG_URING_PATH 00:25:44.974 #undef SPDK_CONFIG_URING_ZNS 00:25:44.974 #undef SPDK_CONFIG_USDT 00:25:44.974 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:25:44.974 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:25:44.974 #undef SPDK_CONFIG_VFIO_USER 00:25:44.974 #define SPDK_CONFIG_VFIO_USER_DIR 00:25:44.974 #define SPDK_CONFIG_VHOST 1 00:25:44.974 #define SPDK_CONFIG_VIRTIO 1 00:25:44.974 #undef SPDK_CONFIG_VTUNE 00:25:44.974 #define SPDK_CONFIG_VTUNE_DIR 00:25:44.974 #define SPDK_CONFIG_WERROR 1 00:25:44.974 #define SPDK_CONFIG_WPDK_DIR 00:25:44.974 #undef SPDK_CONFIG_XNVME 00:25:44.974 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:25:44.974 05:23:03 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:25:44.974 05:23:03 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:44.974 05:23:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:44.974 05:23:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:44.974 05:23:03 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:25:44.974 05:23:03 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:44.974 05:23:03 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:44.974 05:23:03 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:44.974 05:23:03 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:44.974 05:23:03 -- paths/export.sh@6 -- # export PATH 00:25:44.974 05:23:03 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:44.974 05:23:03 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:25:44.974 05:23:03 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:25:44.974 05:23:03 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:25:44.974 05:23:03 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:25:44.974 05:23:03 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:25:44.974 05:23:03 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:25:44.974 05:23:03 -- pm/common@16 -- # TEST_TAG=N/A 00:25:44.974 05:23:03 -- pm/common@17 -- # 
TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:25:44.974 05:23:03 -- common/autotest_common.sh@52 -- # : 1 00:25:44.974 05:23:03 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:25:44.974 05:23:03 -- common/autotest_common.sh@56 -- # : 0 00:25:44.974 05:23:03 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:25:44.974 05:23:03 -- common/autotest_common.sh@58 -- # : 0 00:25:44.974 05:23:03 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:25:44.974 05:23:03 -- common/autotest_common.sh@60 -- # : 1 00:25:44.974 05:23:03 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:25:44.974 05:23:03 -- common/autotest_common.sh@62 -- # : 1 00:25:44.974 05:23:03 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:25:44.974 05:23:03 -- common/autotest_common.sh@64 -- # : 00:25:44.974 05:23:03 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:25:44.974 05:23:03 -- common/autotest_common.sh@66 -- # : 0 00:25:44.974 05:23:03 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:25:44.974 05:23:03 -- common/autotest_common.sh@68 -- # : 0 00:25:44.974 05:23:03 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:25:44.974 05:23:03 -- common/autotest_common.sh@70 -- # : 0 00:25:44.974 05:23:03 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:25:44.974 05:23:03 -- common/autotest_common.sh@72 -- # : 0 00:25:44.974 05:23:03 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:25:44.974 05:23:03 -- common/autotest_common.sh@74 -- # : 1 00:25:44.974 05:23:03 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:25:44.974 05:23:03 -- common/autotest_common.sh@76 -- # : 0 00:25:44.974 05:23:03 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:25:44.974 05:23:03 -- common/autotest_common.sh@78 -- # : 0 00:25:44.974 05:23:03 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:25:44.974 05:23:03 -- common/autotest_common.sh@80 -- # : 0 00:25:44.974 05:23:03 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:25:44.974 05:23:03 -- common/autotest_common.sh@82 -- # : 0 00:25:44.974 05:23:03 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:25:44.974 05:23:03 -- common/autotest_common.sh@84 -- # : 0 00:25:44.974 05:23:03 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:25:44.974 05:23:03 -- common/autotest_common.sh@86 -- # : 0 00:25:44.974 05:23:03 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:25:44.974 05:23:03 -- common/autotest_common.sh@88 -- # : 0 00:25:44.974 05:23:03 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:25:44.974 05:23:03 -- common/autotest_common.sh@90 -- # : 0 00:25:44.974 05:23:03 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:25:44.974 05:23:03 -- common/autotest_common.sh@92 -- # : 0 00:25:44.974 05:23:03 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:25:44.974 05:23:03 -- common/autotest_common.sh@94 -- # : 0 00:25:44.974 05:23:03 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:25:44.974 05:23:03 -- common/autotest_common.sh@96 -- # : rdma 00:25:44.975 05:23:03 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:25:44.975 05:23:03 -- common/autotest_common.sh@98 -- # : 0 00:25:44.975 05:23:03 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:25:44.975 05:23:03 -- common/autotest_common.sh@100 -- # : 0 00:25:44.975 
05:23:03 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:25:44.975 05:23:03 -- common/autotest_common.sh@102 -- # : 1 00:25:44.975 05:23:03 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:25:44.975 05:23:03 -- common/autotest_common.sh@104 -- # : 0 00:25:44.975 05:23:03 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:25:44.975 05:23:03 -- common/autotest_common.sh@106 -- # : 0 00:25:44.975 05:23:03 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:25:44.975 05:23:03 -- common/autotest_common.sh@108 -- # : 0 00:25:44.975 05:23:03 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:25:44.975 05:23:03 -- common/autotest_common.sh@110 -- # : 0 00:25:44.975 05:23:03 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:25:44.975 05:23:03 -- common/autotest_common.sh@112 -- # : 0 00:25:44.975 05:23:03 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:25:44.975 05:23:03 -- common/autotest_common.sh@114 -- # : 1 00:25:44.975 05:23:03 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:25:44.975 05:23:03 -- common/autotest_common.sh@116 -- # : 1 00:25:44.975 05:23:03 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:25:44.975 05:23:03 -- common/autotest_common.sh@118 -- # : 00:25:44.975 05:23:03 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:25:44.975 05:23:03 -- common/autotest_common.sh@120 -- # : 0 00:25:44.975 05:23:03 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:25:44.975 05:23:03 -- common/autotest_common.sh@122 -- # : 0 00:25:44.975 05:23:03 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:25:44.975 05:23:03 -- common/autotest_common.sh@124 -- # : 0 00:25:44.975 05:23:03 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:25:44.975 05:23:03 -- common/autotest_common.sh@126 -- # : 0 00:25:44.975 05:23:03 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:25:44.975 05:23:03 -- common/autotest_common.sh@128 -- # : 0 00:25:44.975 05:23:03 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:25:44.975 05:23:03 -- common/autotest_common.sh@130 -- # : 0 00:25:44.975 05:23:03 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:25:44.975 05:23:03 -- common/autotest_common.sh@132 -- # : 00:25:44.975 05:23:03 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:25:44.975 05:23:03 -- common/autotest_common.sh@134 -- # : true 00:25:44.975 05:23:03 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:25:44.975 05:23:03 -- common/autotest_common.sh@136 -- # : 1 00:25:44.975 05:23:03 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:25:44.975 05:23:03 -- common/autotest_common.sh@138 -- # : 0 00:25:44.975 05:23:03 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:25:44.975 05:23:03 -- common/autotest_common.sh@140 -- # : 0 00:25:44.975 05:23:03 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:25:44.975 05:23:03 -- common/autotest_common.sh@142 -- # : 0 00:25:44.975 05:23:03 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:25:44.975 05:23:03 -- common/autotest_common.sh@144 -- # : 0 00:25:44.975 05:23:03 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:25:44.975 05:23:03 -- common/autotest_common.sh@146 -- # : 0 00:25:44.975 05:23:03 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:25:44.975 05:23:03 -- common/autotest_common.sh@148 -- # : 
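The long run of "# : value" / "# export SPDK_TEST_*" pairs traced here is autotest_common.sh giving every test flag a default and then exporting it so child scripts see the same settings. A minimal bash sketch of that idiom, written for illustration rather than lifted from the SPDK source (the flag names appear in the trace; the exact syntax and default values used here are assumptions):

: "${SPDK_RUN_FUNCTIONAL_TEST:=1}"     # keep the value from autorun-spdk.conf if already set
export SPDK_RUN_FUNCTIONAL_TEST
: "${SPDK_TEST_UNITTEST:=1}"
export SPDK_TEST_UNITTEST
: "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"  # a few flags default to a string instead of 0/1
export SPDK_TEST_NVMF_TRANSPORT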
00:25:44.975 05:23:03 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:25:44.975 05:23:03 -- common/autotest_common.sh@150 -- # : 0 00:25:44.975 05:23:03 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:25:44.975 05:23:03 -- common/autotest_common.sh@152 -- # : 0 00:25:44.975 05:23:03 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:25:44.975 05:23:03 -- common/autotest_common.sh@154 -- # : 0 00:25:44.975 05:23:03 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:25:44.975 05:23:03 -- common/autotest_common.sh@156 -- # : 0 00:25:44.975 05:23:03 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:25:44.975 05:23:03 -- common/autotest_common.sh@158 -- # : 0 00:25:44.975 05:23:03 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:25:44.975 05:23:03 -- common/autotest_common.sh@160 -- # : 0 00:25:44.975 05:23:03 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:25:44.975 05:23:03 -- common/autotest_common.sh@163 -- # : 00:25:44.975 05:23:03 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:25:44.975 05:23:03 -- common/autotest_common.sh@165 -- # : 0 00:25:44.975 05:23:03 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:25:44.975 05:23:03 -- common/autotest_common.sh@167 -- # : 0 00:25:44.975 05:23:03 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:25:44.975 05:23:03 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:25:44.975 05:23:03 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:25:44.975 05:23:03 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:25:44.975 05:23:03 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:25:44.975 05:23:03 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:25:44.975 05:23:03 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:25:44.975 05:23:03 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:25:44.975 05:23:03 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:25:44.975 05:23:03 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:25:44.975 05:23:03 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:25:44.975 05:23:03 -- common/autotest_common.sh@181 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:25:44.975 05:23:03 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:25:44.975 05:23:03 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:25:44.975 05:23:03 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:25:44.975 05:23:03 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:25:44.975 05:23:03 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:25:44.975 05:23:03 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:25:44.975 05:23:03 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:25:44.975 05:23:03 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:25:44.975 05:23:03 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:25:44.975 05:23:03 -- common/autotest_common.sh@196 -- # cat 00:25:44.975 05:23:03 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:25:44.975 05:23:03 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:25:44.975 05:23:03 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:25:44.975 05:23:03 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:25:44.975 05:23:03 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:25:44.975 05:23:03 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:25:44.975 05:23:03 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:25:44.975 05:23:03 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:25:44.975 05:23:03 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:25:44.975 05:23:03 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:25:44.975 05:23:03 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:25:44.975 05:23:03 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:25:44.975 05:23:03 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:25:44.975 05:23:03 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:25:44.975 05:23:03 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:25:44.975 05:23:03 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:25:44.975 05:23:03 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:25:44.975 05:23:03 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:25:44.975 05:23:03 -- 
common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:25:44.975 05:23:03 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:25:44.975 05:23:03 -- common/autotest_common.sh@249 -- # export valgrind= 00:25:44.975 05:23:03 -- common/autotest_common.sh@249 -- # valgrind= 00:25:44.975 05:23:03 -- common/autotest_common.sh@255 -- # uname -s 00:25:44.975 05:23:03 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:25:44.975 05:23:03 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:25:44.975 05:23:03 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:25:44.975 05:23:03 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:25:44.975 05:23:03 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:25:44.975 05:23:03 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:25:44.975 05:23:03 -- common/autotest_common.sh@265 -- # MAKE=make 00:25:44.975 05:23:03 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:25:44.975 05:23:03 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:25:44.975 05:23:03 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:25:44.975 05:23:03 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:25:44.976 05:23:03 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:25:44.976 05:23:03 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:25:44.976 05:23:03 -- common/autotest_common.sh@309 -- # [[ -z 87720 ]] 00:25:44.976 05:23:03 -- common/autotest_common.sh@309 -- # kill -0 87720 00:25:44.976 05:23:03 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:25:44.976 05:23:03 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:25:44.976 05:23:03 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:25:44.976 05:23:03 -- common/autotest_common.sh@322 -- # local mount target_dir 00:25:44.976 05:23:03 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:25:44.976 05:23:03 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:25:44.976 05:23:03 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:25:44.976 05:23:03 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:25:44.976 05:23:03 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.oy0wbk 00:25:44.976 05:23:03 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:25:44.976 05:23:03 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:25:44.976 05:23:03 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:25:44.976 05:23:03 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.oy0wbk/tests/interrupt /tmp/spdk.oy0wbk 00:25:44.976 05:23:03 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:25:44.976 05:23:03 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:44.976 05:23:03 -- common/autotest_common.sh@318 -- # df -T 00:25:44.976 05:23:03 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:25:44.976 05:23:03 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:25:44.976 05:23:03 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:25:44.976 05:23:03 -- common/autotest_common.sh@353 -- # avails["$mount"]=1249308672 00:25:44.976 05:23:03 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254023168 00:25:44.976 05:23:03 -- common/autotest_common.sh@354 -- # uses["$mount"]=4714496 
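The read loop traced above and below walks `df -T` output into per-mount tables (filesystem type, total size, available space) so set_test_storage can pick a directory with at least the ~2 GiB it requested. A rough stand-alone sketch of the same approach, using portable df columns and my own 1 KiB-block-to-byte conversion; it is not the SPDK helper itself, and the paths are taken from the trace:

declare -A fss sizes avails
while read -r source fs blocks _used avail _pct mount; do
    fss["$mount"]=$fs
    sizes["$mount"]=$((blocks * 1024))   # df -P prints 1 KiB blocks; convert to bytes
    avails["$mount"]=$((avail * 1024))
done < <(df -TP | tail -n +2)            # skip the header line

requested_size=$((2 * 1024 * 1024 * 1024))                 # 2 GiB, as requested in the trace
target=/home/vagrant/spdk_repo/spdk/test/interrupt
mount=$(df -P "$target" | awk 'NR==2 {print $6}')          # mount point backing the test dir
avail_bytes=${avails[$mount]:-0}
if (( avail_bytes >= requested_size )); then
    echo "using $target on $mount ($avail_bytes bytes free)"
fi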
00:25:44.976 05:23:03 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:44.976 05:23:03 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:25:44.976 05:23:03 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:25:44.976 05:23:03 -- common/autotest_common.sh@353 -- # avails["$mount"]=10286333952 00:25:44.976 05:23:03 -- common/autotest_common.sh@353 -- # sizes["$mount"]=19681529856 00:25:44.976 05:23:03 -- common/autotest_common.sh@354 -- # uses["$mount"]=9378418688 00:25:44.976 05:23:03 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:44.976 05:23:03 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:25:44.976 05:23:03 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:25:44.976 05:23:03 -- common/autotest_common.sh@353 -- # avails["$mount"]=6268854272 00:25:44.976 05:23:03 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6270111744 00:25:44.976 05:23:03 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:25:44.976 05:23:03 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:44.976 05:23:03 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:25:44.976 05:23:03 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:25:44.976 05:23:03 -- common/autotest_common.sh@353 -- # avails["$mount"]=5242880 00:25:44.976 05:23:03 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5242880 00:25:44.976 05:23:03 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:25:44.976 05:23:03 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:44.976 05:23:03 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda16 00:25:44.976 05:23:03 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:25:44.976 05:23:03 -- common/autotest_common.sh@353 -- # avails["$mount"]=777306112 00:25:44.976 05:23:03 -- common/autotest_common.sh@353 -- # sizes["$mount"]=923156480 00:25:44.976 05:23:03 -- common/autotest_common.sh@354 -- # uses["$mount"]=81207296 00:25:44.976 05:23:03 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:44.976 05:23:03 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda15 00:25:44.976 05:23:03 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:25:44.976 05:23:03 -- common/autotest_common.sh@353 -- # avails["$mount"]=103000064 00:25:44.976 05:23:03 -- common/autotest_common.sh@353 -- # sizes["$mount"]=109395968 00:25:44.976 05:23:03 -- common/autotest_common.sh@354 -- # uses["$mount"]=6395904 00:25:44.976 05:23:03 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:44.976 05:23:03 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:25:44.976 05:23:03 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:25:44.976 05:23:03 -- common/autotest_common.sh@353 -- # avails["$mount"]=1254006784 00:25:44.976 05:23:03 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254019072 00:25:44.976 05:23:03 -- common/autotest_common.sh@354 -- # uses["$mount"]=12288 00:25:44.976 05:23:03 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:44.976 05:23:03 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output 00:25:44.976 05:23:03 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:25:44.976 05:23:03 -- common/autotest_common.sh@353 -- # 
avails["$mount"]=98727628800 00:25:44.976 05:23:03 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:25:44.976 05:23:03 -- common/autotest_common.sh@354 -- # uses["$mount"]=975151104 00:25:44.976 05:23:03 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:44.976 05:23:03 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:25:44.976 * Looking for test storage... 00:25:44.976 05:23:03 -- common/autotest_common.sh@359 -- # local target_space new_size 00:25:44.976 05:23:03 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:25:44.976 05:23:03 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:25:44.976 05:23:03 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:25:44.976 05:23:04 -- common/autotest_common.sh@363 -- # mount=/ 00:25:44.976 05:23:04 -- common/autotest_common.sh@365 -- # target_space=10286333952 00:25:44.976 05:23:04 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:25:44.976 05:23:04 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:25:44.976 05:23:04 -- common/autotest_common.sh@371 -- # [[ ext4 == tmpfs ]] 00:25:44.976 05:23:04 -- common/autotest_common.sh@371 -- # [[ ext4 == ramfs ]] 00:25:44.976 05:23:04 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:25:44.976 05:23:04 -- common/autotest_common.sh@372 -- # new_size=11593011200 00:25:44.976 05:23:04 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:25:44.976 05:23:04 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:25:44.976 05:23:04 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:25:44.976 05:23:04 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:25:44.976 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:25:44.976 05:23:04 -- common/autotest_common.sh@380 -- # return 0 00:25:44.976 05:23:04 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:25:44.976 05:23:04 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:25:44.976 05:23:04 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:25:44.976 05:23:04 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:25:44.976 05:23:04 -- common/autotest_common.sh@1672 -- # true 00:25:44.976 05:23:04 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:25:44.976 05:23:04 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:25:44.976 05:23:04 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:25:44.976 05:23:04 -- common/autotest_common.sh@27 -- # exec 00:25:44.976 05:23:04 -- common/autotest_common.sh@29 -- # exec 00:25:44.976 05:23:04 -- common/autotest_common.sh@31 -- # xtrace_restore 00:25:44.976 05:23:04 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:25:44.976 05:23:04 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:25:44.976 05:23:04 -- common/autotest_common.sh@18 -- # set -x 00:25:44.976 05:23:04 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:44.976 05:23:04 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:25:44.976 05:23:04 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:25:44.976 05:23:04 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:25:44.976 05:23:04 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:25:44.976 05:23:04 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:25:44.976 05:23:04 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:25:44.976 05:23:04 -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:25:44.976 05:23:04 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:25:44.976 05:23:04 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:44.976 05:23:04 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:25:44.976 05:23:04 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=87759 00:25:44.976 05:23:04 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:25:44.976 05:23:04 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:44.976 05:23:04 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 87759 /var/tmp/spdk.sock 00:25:44.976 05:23:04 -- common/autotest_common.sh@819 -- # '[' -z 87759 ']' 00:25:44.976 05:23:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:44.976 05:23:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:44.976 05:23:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:44.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:44.976 05:23:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:44.976 05:23:04 -- common/autotest_common.sh@10 -- # set +x 00:25:44.977 [2024-07-26 05:23:04.049517] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
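With interrupt_tgt up on cpumask 0x07 and listening on /var/tmp/spdk.sock, the test queries the target's pollers over JSON-RPC and parses the reply with jq, as the trace below shows. A small sketch of that query, reusing the rpc.py path, socket, and jq filters visible in the trace; the pass/fail checks of the real test are omitted:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock
# Ask the target for thread 0's pollers and pull the names out with jq,
# mirroring the filters used in the trace that follows.
app_thread=$("$rpc_py" -s "$sock" thread_get_pollers | jq -r '.threads[0]')
active=$(jq -r '.active_pollers[].name' <<< "$app_thread")
timed=$(jq -r '.timed_pollers[].name'  <<< "$app_thread")
printf 'active pollers: %s\ntimed pollers: %s\n' "${active:-<none>}" "${timed:-<none>}"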
00:25:44.977 [2024-07-26 05:23:04.049827] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87759 ] 00:25:45.235 [2024-07-26 05:23:04.203954] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:45.495 [2024-07-26 05:23:04.356552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:45.495 [2024-07-26 05:23:04.356648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.495 [2024-07-26 05:23:04.356667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:45.495 [2024-07-26 05:23:04.572556] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:46.062 05:23:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:46.062 05:23:04 -- common/autotest_common.sh@852 -- # return 0 00:25:46.062 05:23:04 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:25:46.062 05:23:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:46.062 05:23:04 -- common/autotest_common.sh@10 -- # set +x 00:25:46.062 05:23:04 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:25:46.062 05:23:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:46.062 05:23:04 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:25:46.062 "name": "app_thread", 00:25:46.062 "id": 1, 00:25:46.062 "active_pollers": [], 00:25:46.062 "timed_pollers": [ 00:25:46.062 { 00:25:46.062 "name": "rpc_subsystem_poll", 00:25:46.062 "id": 1, 00:25:46.062 "state": "waiting", 00:25:46.062 "run_count": 0, 00:25:46.062 "busy_count": 0, 00:25:46.062 "period_ticks": 8800000 00:25:46.062 } 00:25:46.062 ], 00:25:46.062 "paused_pollers": [] 00:25:46.062 }' 00:25:46.062 05:23:04 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:25:46.062 05:23:04 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:25:46.062 05:23:04 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:25:46.062 05:23:04 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:25:46.062 05:23:04 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll 00:25:46.062 05:23:04 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:25:46.062 05:23:04 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:25:46.062 05:23:04 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:25:46.062 05:23:04 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:25:46.062 5000+0 records in 00:25:46.062 5000+0 records out 00:25:46.062 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0198944 s, 515 MB/s 00:25:46.062 05:23:04 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:25:46.320 AIO0 00:25:46.320 05:23:05 -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:46.579 05:23:05 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:25:46.579 05:23:05 -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:25:46.579 05:23:05 -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r 
'.threads[0]' 00:25:46.579 05:23:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:46.579 05:23:05 -- common/autotest_common.sh@10 -- # set +x 00:25:46.579 05:23:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:46.579 05:23:05 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:25:46.579 "name": "app_thread", 00:25:46.579 "id": 1, 00:25:46.579 "active_pollers": [], 00:25:46.579 "timed_pollers": [ 00:25:46.579 { 00:25:46.579 "name": "rpc_subsystem_poll", 00:25:46.579 "id": 1, 00:25:46.579 "state": "waiting", 00:25:46.579 "run_count": 0, 00:25:46.579 "busy_count": 0, 00:25:46.579 "period_ticks": 8800000 00:25:46.579 } 00:25:46.579 ], 00:25:46.579 "paused_pollers": [] 00:25:46.579 }' 00:25:46.579 05:23:05 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:25:46.579 05:23:05 -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:25:46.579 05:23:05 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:25:46.579 05:23:05 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:25:46.579 05:23:05 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll 00:25:46.579 05:23:05 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l ]] 00:25:46.579 05:23:05 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:25:46.579 05:23:05 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 87759 00:25:46.579 05:23:05 -- common/autotest_common.sh@926 -- # '[' -z 87759 ']' 00:25:46.579 05:23:05 -- common/autotest_common.sh@930 -- # kill -0 87759 00:25:46.579 05:23:05 -- common/autotest_common.sh@931 -- # uname 00:25:46.579 05:23:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:46.579 05:23:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87759 00:25:46.579 05:23:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:46.579 05:23:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:46.579 05:23:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87759' 00:25:46.579 killing process with pid 87759 00:25:46.579 05:23:05 -- common/autotest_common.sh@945 -- # kill 87759 00:25:46.579 05:23:05 -- common/autotest_common.sh@950 -- # wait 87759 00:25:47.963 05:23:06 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:25:47.963 05:23:06 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:25:47.963 00:25:47.963 real 0m2.896s 00:25:47.963 user 0m2.262s 00:25:47.963 sys 0m0.481s 00:25:47.963 ************************************ 00:25:47.963 END TEST reap_unregistered_poller 00:25:47.963 ************************************ 00:25:47.963 05:23:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:47.963 05:23:06 -- common/autotest_common.sh@10 -- # set +x 00:25:47.963 05:23:06 -- spdk/autotest.sh@204 -- # uname -s 00:25:47.963 05:23:06 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:25:47.963 05:23:06 -- spdk/autotest.sh@205 -- # [[ 1 -eq 1 ]] 00:25:47.963 05:23:06 -- spdk/autotest.sh@211 -- # [[ 0 -eq 0 ]] 00:25:47.963 05:23:06 -- spdk/autotest.sh@212 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:25:47.963 05:23:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:47.963 05:23:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:47.963 05:23:06 -- common/autotest_common.sh@10 -- 
# set +x 00:25:47.963 ************************************ 00:25:47.963 START TEST spdk_dd 00:25:47.963 ************************************ 00:25:47.963 05:23:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:25:47.963 * Looking for test storage... 00:25:47.963 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:25:47.963 05:23:06 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:47.963 05:23:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:47.963 05:23:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:47.963 05:23:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:47.963 05:23:06 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:47.963 05:23:06 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:47.963 05:23:06 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:47.963 05:23:06 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:47.963 05:23:06 -- paths/export.sh@6 -- # export PATH 00:25:47.963 05:23:06 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:47.963 05:23:06 -- dd/dd.sh@10 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:48.227 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:25:48.227 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:48.797 05:23:07 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:25:48.797 05:23:07 -- dd/dd.sh@11 -- # nvme_in_userspace 00:25:48.797 05:23:07 -- scripts/common.sh@311 -- # local bdf bdfs 00:25:48.797 05:23:07 -- scripts/common.sh@312 -- # local nvmes 00:25:48.797 05:23:07 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:25:48.797 05:23:07 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:25:48.797 05:23:07 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:25:48.797 05:23:07 -- scripts/common.sh@297 -- # local bdf= 00:25:48.797 05:23:07 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:25:48.797 05:23:07 -- scripts/common.sh@232 -- # local class 00:25:48.797 05:23:07 -- scripts/common.sh@233 -- # local subclass 00:25:48.797 05:23:07 -- scripts/common.sh@234 -- # local progif 00:25:48.797 05:23:07 -- scripts/common.sh@235 -- # printf %02x 1 00:25:48.797 05:23:07 -- scripts/common.sh@235 -- # class=01 00:25:48.797 05:23:07 -- scripts/common.sh@236 -- # printf %02x 8 00:25:48.797 05:23:07 -- scripts/common.sh@236 -- # subclass=08 00:25:48.797 05:23:07 -- scripts/common.sh@237 -- # printf %02x 2 00:25:48.797 05:23:07 -- scripts/common.sh@237 -- # progif=02 00:25:48.797 05:23:07 -- scripts/common.sh@239 -- # hash lspci 00:25:48.797 05:23:07 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:25:48.797 05:23:07 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:25:48.797 05:23:07 -- scripts/common.sh@242 -- # grep -i -- -p02 00:25:48.797 05:23:07 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:25:48.797 05:23:07 -- scripts/common.sh@244 -- # tr -d '"' 00:25:48.797 05:23:07 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:25:48.797 05:23:07 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:25:48.797 05:23:07 -- scripts/common.sh@15 -- # local i 00:25:48.797 05:23:07 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:25:48.797 05:23:07 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:25:48.797 05:23:07 -- scripts/common.sh@24 -- # return 0 00:25:48.797 05:23:07 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:25:48.797 05:23:07 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:25:48.798 05:23:07 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:25:48.798 05:23:07 -- scripts/common.sh@322 -- # uname -s 00:25:48.798 05:23:07 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:25:48.798 05:23:07 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:25:48.798 05:23:07 -- scripts/common.sh@327 -- # (( 1 )) 00:25:48.798 05:23:07 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 00:25:48.798 05:23:07 -- dd/dd.sh@13 -- # check_liburing 00:25:48.798 05:23:07 -- dd/common.sh@139 -- # local lib so 00:25:48.798 05:23:07 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:25:48.798 05:23:07 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:48.798 05:23:07 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:25:48.798 05:23:07 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:48.798 05:23:07 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:25:48.798 05:23:07 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:48.798 05:23:07 -- dd/common.sh@143 -- # 
[[ libasan.so.8 == liburing.so.* ]] 00:25:48.798 05:23:07 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:48.798 05:23:07 -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:25:48.798 05:23:07 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:48.798 05:23:07 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:25:48.798 05:23:07 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:48.798 05:23:07 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:25:48.798 05:23:07 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:48.798 05:23:07 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:25:48.798 05:23:07 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:25:48.798 * spdk_dd linked to liburing 00:25:48.798 05:23:07 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:25:48.798 05:23:07 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:25:48.798 05:23:07 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:25:48.798 05:23:07 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:25:48.798 05:23:07 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:25:48.798 05:23:07 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:25:48.798 05:23:07 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:25:48.798 05:23:07 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:25:48.798 05:23:07 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:25:48.798 05:23:07 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:25:48.798 05:23:07 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:25:48.798 05:23:07 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:25:48.798 05:23:07 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:25:48.798 05:23:07 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:25:48.798 05:23:07 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:25:48.798 05:23:07 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:25:48.798 05:23:07 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:25:48.798 05:23:07 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:25:48.798 05:23:07 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:25:48.798 05:23:07 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:25:48.798 05:23:07 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:25:48.798 05:23:07 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:25:48.798 05:23:07 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:25:48.798 05:23:07 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:25:48.798 05:23:07 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:25:48.798 05:23:07 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:25:48.798 05:23:07 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:25:48.798 05:23:07 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:25:48.798 05:23:07 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:25:48.798 05:23:07 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:25:48.798 05:23:07 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:25:48.798 05:23:07 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:25:48.798 05:23:07 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:25:48.798 05:23:07 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:25:48.798 05:23:07 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:25:48.798 05:23:07 -- common/build_config.sh@34 -- 
# CONFIG_FUZZER_LIB= 00:25:48.798 05:23:07 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:25:48.798 05:23:07 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:25:48.798 05:23:07 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:25:48.798 05:23:07 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:25:48.798 05:23:07 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:25:48.798 05:23:07 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:25:48.798 05:23:07 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:25:48.798 05:23:07 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:25:48.798 05:23:07 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:25:48.798 05:23:07 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:25:48.798 05:23:07 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:25:48.798 05:23:07 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:25:48.798 05:23:07 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:25:48.798 05:23:07 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:25:48.798 05:23:07 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:25:48.798 05:23:07 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:25:48.798 05:23:07 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:25:48.798 05:23:07 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:25:48.798 05:23:07 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:25:48.798 05:23:07 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:25:48.798 05:23:07 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:25:48.798 05:23:07 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:25:48.798 05:23:07 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:25:48.798 05:23:07 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:25:48.798 05:23:07 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:25:48.798 05:23:07 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:25:48.798 05:23:07 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:25:48.798 05:23:07 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:25:48.798 05:23:07 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:25:48.798 05:23:07 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:25:48.798 05:23:07 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:25:48.798 05:23:07 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:25:48.798 05:23:07 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:25:48.798 05:23:07 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:25:48.798 05:23:07 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:25:48.798 05:23:07 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:25:48.798 05:23:07 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:25:48.798 05:23:07 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:25:48.798 05:23:07 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:25:48.798 05:23:07 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:25:48.798 05:23:07 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:25:48.798 05:23:07 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:25:48.798 05:23:07 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:25:48.798 05:23:07 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:25:48.798 05:23:07 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:25:48.798 05:23:07 -- dd/common.sh@149 -- # [[ n != y ]] 00:25:48.798 05:23:07 -- dd/common.sh@150 -- # printf '* spdk_dd built with liburing, 
but no liburing support requested?\n' 00:25:48.798 * spdk_dd built with liburing, but no liburing support requested? 00:25:48.798 05:23:07 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:25:48.798 05:23:07 -- dd/common.sh@156 -- # export liburing_in_use=1 00:25:48.798 05:23:07 -- dd/common.sh@156 -- # liburing_in_use=1 00:25:48.798 05:23:07 -- dd/common.sh@157 -- # return 0 00:25:48.798 05:23:07 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:25:48.798 05:23:07 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:25:48.798 05:23:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:48.798 05:23:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:48.798 05:23:07 -- common/autotest_common.sh@10 -- # set +x 00:25:48.798 ************************************ 00:25:48.798 START TEST spdk_dd_basic_rw 00:25:48.798 ************************************ 00:25:48.798 05:23:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:25:48.798 * Looking for test storage... 00:25:48.798 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:25:48.798 05:23:07 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:48.798 05:23:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:48.798 05:23:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:48.798 05:23:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:48.798 05:23:07 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:48.798 05:23:07 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:48.799 05:23:07 -- paths/export.sh@4 -- # 
PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:48.799 05:23:07 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:48.799 05:23:07 -- paths/export.sh@6 -- # export PATH 00:25:48.799 05:23:07 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:48.799 05:23:07 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:25:48.799 05:23:07 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:25:48.799 05:23:07 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:25:48.799 05:23:07 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:25:48.799 05:23:07 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:25:48.799 05:23:07 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:25:48.799 05:23:07 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:25:48.799 05:23:07 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:48.799 05:23:07 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:48.799 05:23:07 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:25:48.799 05:23:07 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:25:48.799 05:23:07 -- dd/common.sh@126 -- # mapfile -t id 00:25:48.799 05:23:07 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:25:49.061 05:23:08 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 
[1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set 
Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 105 Data Units Written: 7 Host Read Commands: 2249 Host Write Commands: 109 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported 
Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:25:49.061 05:23:08 -- dd/common.sh@130 -- # lbaf=04 00:25:49.061 05:23:08 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security 
Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 
Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 105 Data Units Written: 7 Host Read Commands: 2249 Host Write Commands: 109 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:25:49.061 05:23:08 -- dd/common.sh@132 -- # lbaf=4096 00:25:49.061 05:23:08 -- dd/common.sh@134 -- # echo 4096 00:25:49.061 05:23:08 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:25:49.061 05:23:08 -- dd/basic_rw.sh@96 -- # : 00:25:49.061 05:23:08 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:25:49.061 05:23:08 -- dd/basic_rw.sh@96 -- # gen_conf 00:25:49.061 05:23:08 -- dd/common.sh@31 -- # xtrace_disable 00:25:49.062 05:23:08 -- common/autotest_common.sh@10 -- # set +x 00:25:49.062 05:23:08 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:25:49.062 05:23:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:49.062 05:23:08 -- common/autotest_common.sh@10 -- # set +x 00:25:49.062 ************************************ 00:25:49.062 START TEST dd_bs_lt_native_bs 00:25:49.062 ************************************ 00:25:49.062 05:23:08 -- common/autotest_common.sh@1104 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:25:49.062 05:23:08 -- common/autotest_common.sh@640 -- # local es=0 00:25:49.062 05:23:08 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 
--bs=2048 --json /dev/fd/61 00:25:49.062 05:23:08 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:49.062 { 00:25:49.062 "subsystems": [ 00:25:49.062 { 00:25:49.062 "subsystem": "bdev", 00:25:49.062 "config": [ 00:25:49.062 { 00:25:49.062 "params": { 00:25:49.062 "trtype": "pcie", 00:25:49.062 "traddr": "0000:00:06.0", 00:25:49.062 "name": "Nvme0" 00:25:49.062 }, 00:25:49.062 "method": "bdev_nvme_attach_controller" 00:25:49.062 }, 00:25:49.062 { 00:25:49.062 "method": "bdev_wait_for_examine" 00:25:49.062 } 00:25:49.062 ] 00:25:49.062 } 00:25:49.062 ] 00:25:49.062 } 00:25:49.062 05:23:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:49.062 05:23:08 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:49.062 05:23:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:49.062 05:23:08 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:49.062 05:23:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:49.062 05:23:08 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:49.062 05:23:08 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:49.062 05:23:08 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:25:49.327 [2024-07-26 05:23:08.185334] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:49.327 [2024-07-26 05:23:08.185489] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88028 ] 00:25:49.327 [2024-07-26 05:23:08.360787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.588 [2024-07-26 05:23:08.596396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.847 [2024-07-26 05:23:08.895990] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:25:49.847 [2024-07-26 05:23:08.896076] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:50.415 [2024-07-26 05:23:09.290130] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:50.674 05:23:09 -- common/autotest_common.sh@643 -- # es=234 00:25:50.674 05:23:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:50.674 05:23:09 -- common/autotest_common.sh@652 -- # es=106 00:25:50.674 05:23:09 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:50.674 05:23:09 -- common/autotest_common.sh@660 -- # es=1 00:25:50.674 05:23:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:50.674 00:25:50.674 real 0m1.535s 00:25:50.674 user 0m1.230s 00:25:50.674 sys 0m0.223s 00:25:50.674 ************************************ 00:25:50.674 END TEST dd_bs_lt_native_bs 00:25:50.674 ************************************ 00:25:50.674 05:23:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:50.674 05:23:09 -- common/autotest_common.sh@10 -- # set +x 00:25:50.674 05:23:09 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:25:50.674 05:23:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:50.674 05:23:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:50.674 05:23:09 -- 
common/autotest_common.sh@10 -- # set +x 00:25:50.674 ************************************ 00:25:50.674 START TEST dd_rw 00:25:50.674 ************************************ 00:25:50.674 05:23:09 -- common/autotest_common.sh@1104 -- # basic_rw 4096 00:25:50.674 05:23:09 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:25:50.674 05:23:09 -- dd/basic_rw.sh@12 -- # local count size 00:25:50.674 05:23:09 -- dd/basic_rw.sh@13 -- # local qds bss 00:25:50.674 05:23:09 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:25:50.674 05:23:09 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:25:50.674 05:23:09 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:25:50.674 05:23:09 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:25:50.674 05:23:09 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:25:50.674 05:23:09 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:25:50.674 05:23:09 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:25:50.674 05:23:09 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:25:50.675 05:23:09 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:50.675 05:23:09 -- dd/basic_rw.sh@23 -- # count=15 00:25:50.675 05:23:09 -- dd/basic_rw.sh@24 -- # count=15 00:25:50.675 05:23:09 -- dd/basic_rw.sh@25 -- # size=61440 00:25:50.675 05:23:09 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:25:50.675 05:23:09 -- dd/common.sh@98 -- # xtrace_disable 00:25:50.675 05:23:09 -- common/autotest_common.sh@10 -- # set +x 00:25:51.269 05:23:10 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:25:51.269 05:23:10 -- dd/basic_rw.sh@30 -- # gen_conf 00:25:51.269 05:23:10 -- dd/common.sh@31 -- # xtrace_disable 00:25:51.269 05:23:10 -- common/autotest_common.sh@10 -- # set +x 00:25:51.269 { 00:25:51.269 "subsystems": [ 00:25:51.269 { 00:25:51.269 "subsystem": "bdev", 00:25:51.269 "config": [ 00:25:51.269 { 00:25:51.269 "params": { 00:25:51.269 "trtype": "pcie", 00:25:51.269 "traddr": "0000:00:06.0", 00:25:51.269 "name": "Nvme0" 00:25:51.269 }, 00:25:51.269 "method": "bdev_nvme_attach_controller" 00:25:51.269 }, 00:25:51.269 { 00:25:51.269 "method": "bdev_wait_for_examine" 00:25:51.269 } 00:25:51.269 ] 00:25:51.269 } 00:25:51.269 ] 00:25:51.269 } 00:25:51.269 [2024-07-26 05:23:10.301932] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
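The two regex matches against the spdk_nvme_identify dump above are how the test learns the drive's native block size before any copies run: the first capture pulls the current LBA format index (04), the second pulls that format's data size (4096 bytes). That 4096-byte figure is also what makes the dd_bs_lt_native_bs check that just finished a negative test, since spdk_dd is asked for --bs=2048 and correctly refuses a block size smaller than the native one. A minimal stand-alone sketch of the extraction, using the same regexes and identify invocation shown in the log (illustrative only, not the actual get_native_nvme_bs code from dd/common.sh):

#!/usr/bin/env bash
# Pull the native LBA data size out of spdk_nvme_identify output.
pci=0000:00:06.0
id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:${pci}")
re_current='Current LBA Format: *LBA Format #([0-9]+)'
[[ $id =~ $re_current ]] && lbaf=${BASH_REMATCH[1]}        # e.g. 04
re_size="LBA Format #${lbaf}: Data Size: *([0-9]+)"
[[ $id =~ $re_size ]] && native_bs=${BASH_REMATCH[1]}      # e.g. 4096
echo "native block size: ${native_bs} bytes"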
00:25:51.269 [2024-07-26 05:23:10.302092] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88071 ] 00:25:51.528 [2024-07-26 05:23:10.469691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.528 [2024-07-26 05:23:10.625199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.032  Copying: 60/60 [kB] (average 19 MBps) 00:25:53.032 00:25:53.032 05:23:11 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:25:53.032 05:23:11 -- dd/basic_rw.sh@37 -- # gen_conf 00:25:53.032 05:23:11 -- dd/common.sh@31 -- # xtrace_disable 00:25:53.032 05:23:11 -- common/autotest_common.sh@10 -- # set +x 00:25:53.032 { 00:25:53.032 "subsystems": [ 00:25:53.032 { 00:25:53.032 "subsystem": "bdev", 00:25:53.032 "config": [ 00:25:53.032 { 00:25:53.032 "params": { 00:25:53.032 "trtype": "pcie", 00:25:53.032 "traddr": "0000:00:06.0", 00:25:53.032 "name": "Nvme0" 00:25:53.032 }, 00:25:53.032 "method": "bdev_nvme_attach_controller" 00:25:53.032 }, 00:25:53.032 { 00:25:53.032 "method": "bdev_wait_for_examine" 00:25:53.032 } 00:25:53.032 ] 00:25:53.032 } 00:25:53.032 ] 00:25:53.032 } 00:25:53.032 [2024-07-26 05:23:11.892520] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:53.032 [2024-07-26 05:23:11.892666] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88097 ] 00:25:53.032 [2024-07-26 05:23:12.056556] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.290 [2024-07-26 05:23:12.203622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.483  Copying: 60/60 [kB] (average 19 MBps) 00:25:54.483 00:25:54.483 05:23:13 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:54.483 05:23:13 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:25:54.483 05:23:13 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:54.483 05:23:13 -- dd/common.sh@11 -- # local nvme_ref= 00:25:54.483 05:23:13 -- dd/common.sh@12 -- # local size=61440 00:25:54.483 05:23:13 -- dd/common.sh@14 -- # local bs=1048576 00:25:54.483 05:23:13 -- dd/common.sh@15 -- # local count=1 00:25:54.483 05:23:13 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:54.483 05:23:13 -- dd/common.sh@18 -- # gen_conf 00:25:54.483 05:23:13 -- dd/common.sh@31 -- # xtrace_disable 00:25:54.483 05:23:13 -- common/autotest_common.sh@10 -- # set +x 00:25:54.483 { 00:25:54.483 "subsystems": [ 00:25:54.483 { 00:25:54.483 "subsystem": "bdev", 00:25:54.483 "config": [ 00:25:54.483 { 00:25:54.483 "params": { 00:25:54.483 "trtype": "pcie", 00:25:54.483 "traddr": "0000:00:06.0", 00:25:54.483 "name": "Nvme0" 00:25:54.483 }, 00:25:54.483 "method": "bdev_nvme_attach_controller" 00:25:54.483 }, 00:25:54.483 { 00:25:54.483 "method": "bdev_wait_for_examine" 00:25:54.483 } 00:25:54.483 ] 00:25:54.483 } 00:25:54.483 ] 00:25:54.483 } 00:25:54.483 [2024-07-26 05:23:13.305123] Starting SPDK v24.01.1-pre git sha1 
dbef7efac / DPDK 23.11.0 initialization... 00:25:54.483 [2024-07-26 05:23:13.305254] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88120 ] 00:25:54.483 [2024-07-26 05:23:13.457318] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.741 [2024-07-26 05:23:13.610677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.936  Copying: 1024/1024 [kB] (average 500 MBps) 00:25:55.936 00:25:55.936 05:23:14 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:55.936 05:23:14 -- dd/basic_rw.sh@23 -- # count=15 00:25:55.936 05:23:14 -- dd/basic_rw.sh@24 -- # count=15 00:25:55.936 05:23:14 -- dd/basic_rw.sh@25 -- # size=61440 00:25:55.936 05:23:14 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:25:55.936 05:23:14 -- dd/common.sh@98 -- # xtrace_disable 00:25:55.936 05:23:14 -- common/autotest_common.sh@10 -- # set +x 00:25:56.503 05:23:15 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:25:56.503 05:23:15 -- dd/basic_rw.sh@30 -- # gen_conf 00:25:56.503 05:23:15 -- dd/common.sh@31 -- # xtrace_disable 00:25:56.503 05:23:15 -- common/autotest_common.sh@10 -- # set +x 00:25:56.503 { 00:25:56.503 "subsystems": [ 00:25:56.503 { 00:25:56.503 "subsystem": "bdev", 00:25:56.503 "config": [ 00:25:56.503 { 00:25:56.503 "params": { 00:25:56.503 "trtype": "pcie", 00:25:56.503 "traddr": "0000:00:06.0", 00:25:56.503 "name": "Nvme0" 00:25:56.503 }, 00:25:56.503 "method": "bdev_nvme_attach_controller" 00:25:56.503 }, 00:25:56.503 { 00:25:56.503 "method": "bdev_wait_for_examine" 00:25:56.503 } 00:25:56.503 ] 00:25:56.503 } 00:25:56.503 ] 00:25:56.503 } 00:25:56.503 [2024-07-26 05:23:15.402984] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
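Each dd_rw pass that follows repeats the four-step cycle just shown for the 4096-byte block size at queue depth 1: write the generated dump file into the Nvme0n1 bdev, read the same number of blocks back into a second file, diff the two, then blank the start of the namespace before the next pass. A condensed sketch of one pass, built only from flags that appear in the log; dd.dump0/dd.dump1, the head -c stand-in for gen_bytes, and the bdev.json file (the attach-controller config the test actually streams over /dev/fd/62, written out in full in a later note) are illustrative placeholders:

#!/usr/bin/env bash
set -e
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
head -c 61440 /dev/urandom > dd.dump0                  # stand-in for "gen_bytes 61440"
# write 15 x 4096-byte blocks from the dump file into the Nvme0n1 bdev
"$DD" --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json bdev.json
# read the same 15 blocks back into a second dump file
"$DD" --ib=Nvme0n1 --of=dd.dump1 --bs=4096 --qd=1 --count=15 --json bdev.json
# the round trip must be byte-identical
diff -q dd.dump0 dd.dump1
# zero the first MiB of the namespace so the next pass starts clean
"$DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json bdev.json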
00:25:56.503 [2024-07-26 05:23:15.403179] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88146 ] 00:25:56.503 [2024-07-26 05:23:15.572179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.762 [2024-07-26 05:23:15.724748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.959  Copying: 60/60 [kB] (average 58 MBps) 00:25:57.959 00:25:57.959 05:23:16 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:25:57.959 05:23:16 -- dd/basic_rw.sh@37 -- # gen_conf 00:25:57.959 05:23:16 -- dd/common.sh@31 -- # xtrace_disable 00:25:57.959 05:23:16 -- common/autotest_common.sh@10 -- # set +x 00:25:57.959 { 00:25:57.959 "subsystems": [ 00:25:57.959 { 00:25:57.959 "subsystem": "bdev", 00:25:57.959 "config": [ 00:25:57.959 { 00:25:57.959 "params": { 00:25:57.959 "trtype": "pcie", 00:25:57.959 "traddr": "0000:00:06.0", 00:25:57.959 "name": "Nvme0" 00:25:57.959 }, 00:25:57.959 "method": "bdev_nvme_attach_controller" 00:25:57.959 }, 00:25:57.959 { 00:25:57.959 "method": "bdev_wait_for_examine" 00:25:57.959 } 00:25:57.959 ] 00:25:57.959 } 00:25:57.959 ] 00:25:57.959 } 00:25:57.959 [2024-07-26 05:23:16.834195] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:57.959 [2024-07-26 05:23:16.834348] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88175 ] 00:25:57.959 [2024-07-26 05:23:17.005031] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.218 [2024-07-26 05:23:17.153638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.414  Copying: 60/60 [kB] (average 58 MBps) 00:25:59.414 00:25:59.414 05:23:18 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:59.414 05:23:18 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:25:59.414 05:23:18 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:59.414 05:23:18 -- dd/common.sh@11 -- # local nvme_ref= 00:25:59.414 05:23:18 -- dd/common.sh@12 -- # local size=61440 00:25:59.414 05:23:18 -- dd/common.sh@14 -- # local bs=1048576 00:25:59.414 05:23:18 -- dd/common.sh@15 -- # local count=1 00:25:59.414 05:23:18 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:59.414 05:23:18 -- dd/common.sh@18 -- # gen_conf 00:25:59.414 05:23:18 -- dd/common.sh@31 -- # xtrace_disable 00:25:59.414 05:23:18 -- common/autotest_common.sh@10 -- # set +x 00:25:59.414 { 00:25:59.414 "subsystems": [ 00:25:59.414 { 00:25:59.414 "subsystem": "bdev", 00:25:59.414 "config": [ 00:25:59.414 { 00:25:59.414 "params": { 00:25:59.414 "trtype": "pcie", 00:25:59.414 "traddr": "0000:00:06.0", 00:25:59.414 "name": "Nvme0" 00:25:59.414 }, 00:25:59.414 "method": "bdev_nvme_attach_controller" 00:25:59.414 }, 00:25:59.414 { 00:25:59.414 "method": "bdev_wait_for_examine" 00:25:59.414 } 00:25:59.414 ] 00:25:59.414 } 00:25:59.414 ] 00:25:59.414 } 00:25:59.414 [2024-07-26 05:23:18.421757] Starting SPDK v24.01.1-pre git sha1 
dbef7efac / DPDK 23.11.0 initialization... 00:25:59.414 [2024-07-26 05:23:18.421908] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88196 ] 00:25:59.672 [2024-07-26 05:23:18.587895] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.672 [2024-07-26 05:23:18.736934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.868  Copying: 1024/1024 [kB] (average 1000 MBps) 00:26:00.868 00:26:00.868 05:23:19 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:26:00.868 05:23:19 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:00.868 05:23:19 -- dd/basic_rw.sh@23 -- # count=7 00:26:00.868 05:23:19 -- dd/basic_rw.sh@24 -- # count=7 00:26:00.868 05:23:19 -- dd/basic_rw.sh@25 -- # size=57344 00:26:00.868 05:23:19 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:26:00.868 05:23:19 -- dd/common.sh@98 -- # xtrace_disable 00:26:00.868 05:23:19 -- common/autotest_common.sh@10 -- # set +x 00:26:01.435 05:23:20 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:26:01.435 05:23:20 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:01.435 05:23:20 -- dd/common.sh@31 -- # xtrace_disable 00:26:01.435 05:23:20 -- common/autotest_common.sh@10 -- # set +x 00:26:01.435 { 00:26:01.435 "subsystems": [ 00:26:01.435 { 00:26:01.435 "subsystem": "bdev", 00:26:01.435 "config": [ 00:26:01.435 { 00:26:01.436 "params": { 00:26:01.436 "trtype": "pcie", 00:26:01.436 "traddr": "0000:00:06.0", 00:26:01.436 "name": "Nvme0" 00:26:01.436 }, 00:26:01.436 "method": "bdev_nvme_attach_controller" 00:26:01.436 }, 00:26:01.436 { 00:26:01.436 "method": "bdev_wait_for_examine" 00:26:01.436 } 00:26:01.436 ] 00:26:01.436 } 00:26:01.436 ] 00:26:01.436 } 00:26:01.436 [2024-07-26 05:23:20.366618] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
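The sizes quoted in these passes follow directly from the native block size detected earlier: dd_rw shifts 4096 left by 0, 1 and 2 to get block sizes of 4096, 8192 and 16384 bytes, pairs them with counts of 15, 7 and 3, and runs every combination at the queue depths in qds=(1 64). The byte totals that show up in the gen_bytes and clear_nvme calls check out as plain shell arithmetic:

echo $((4096 << 0)) $((4096 << 1)) $((4096 << 2))   # 4096 8192 16384
echo $((15 * 4096)) $((7 * 8192)) $((3 * 16384))    # 61440 57344 49152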
00:26:01.436 [2024-07-26 05:23:20.366766] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88221 ] 00:26:01.436 [2024-07-26 05:23:20.539982] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.695 [2024-07-26 05:23:20.751314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.896  Copying: 56/56 [kB] (average 54 MBps) 00:26:02.896 00:26:02.896 05:23:21 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:26:02.896 05:23:21 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:02.896 05:23:21 -- dd/common.sh@31 -- # xtrace_disable 00:26:02.896 05:23:21 -- common/autotest_common.sh@10 -- # set +x 00:26:02.896 { 00:26:02.896 "subsystems": [ 00:26:02.896 { 00:26:02.896 "subsystem": "bdev", 00:26:02.896 "config": [ 00:26:02.896 { 00:26:02.896 "params": { 00:26:02.896 "trtype": "pcie", 00:26:02.896 "traddr": "0000:00:06.0", 00:26:02.896 "name": "Nvme0" 00:26:02.896 }, 00:26:02.896 "method": "bdev_nvme_attach_controller" 00:26:02.896 }, 00:26:02.896 { 00:26:02.896 "method": "bdev_wait_for_examine" 00:26:02.896 } 00:26:02.896 ] 00:26:02.896 } 00:26:02.896 ] 00:26:02.896 } 00:26:03.155 [2024-07-26 05:23:22.006790] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:03.155 [2024-07-26 05:23:22.006940] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88251 ] 00:26:03.155 [2024-07-26 05:23:22.176271] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.414 [2024-07-26 05:23:22.324893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.617  Copying: 56/56 [kB] (average 54 MBps) 00:26:04.617 00:26:04.617 05:23:23 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:04.617 05:23:23 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:26:04.617 05:23:23 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:04.617 05:23:23 -- dd/common.sh@11 -- # local nvme_ref= 00:26:04.617 05:23:23 -- dd/common.sh@12 -- # local size=57344 00:26:04.617 05:23:23 -- dd/common.sh@14 -- # local bs=1048576 00:26:04.617 05:23:23 -- dd/common.sh@15 -- # local count=1 00:26:04.617 05:23:23 -- dd/common.sh@18 -- # gen_conf 00:26:04.617 05:23:23 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:04.617 05:23:23 -- dd/common.sh@31 -- # xtrace_disable 00:26:04.617 05:23:23 -- common/autotest_common.sh@10 -- # set +x 00:26:04.617 { 00:26:04.617 "subsystems": [ 00:26:04.617 { 00:26:04.617 "subsystem": "bdev", 00:26:04.617 "config": [ 00:26:04.617 { 00:26:04.617 "params": { 00:26:04.617 "trtype": "pcie", 00:26:04.617 "traddr": "0000:00:06.0", 00:26:04.617 "name": "Nvme0" 00:26:04.617 }, 00:26:04.617 "method": "bdev_nvme_attach_controller" 00:26:04.617 }, 00:26:04.617 { 00:26:04.617 "method": "bdev_wait_for_examine" 00:26:04.617 } 00:26:04.617 ] 00:26:04.617 } 00:26:04.617 ] 00:26:04.617 } 00:26:04.617 [2024-07-26 05:23:23.434773] Starting SPDK v24.01.1-pre git sha1 
dbef7efac / DPDK 23.11.0 initialization... 00:26:04.617 [2024-07-26 05:23:23.434925] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88271 ] 00:26:04.617 [2024-07-26 05:23:23.602243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.877 [2024-07-26 05:23:23.756918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.072  Copying: 1024/1024 [kB] (average 1000 MBps) 00:26:06.072 00:26:06.072 05:23:24 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:06.072 05:23:24 -- dd/basic_rw.sh@23 -- # count=7 00:26:06.072 05:23:24 -- dd/basic_rw.sh@24 -- # count=7 00:26:06.072 05:23:24 -- dd/basic_rw.sh@25 -- # size=57344 00:26:06.072 05:23:24 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:26:06.073 05:23:24 -- dd/common.sh@98 -- # xtrace_disable 00:26:06.073 05:23:24 -- common/autotest_common.sh@10 -- # set +x 00:26:06.654 05:23:25 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:26:06.654 05:23:25 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:06.654 05:23:25 -- dd/common.sh@31 -- # xtrace_disable 00:26:06.654 05:23:25 -- common/autotest_common.sh@10 -- # set +x 00:26:06.654 { 00:26:06.654 "subsystems": [ 00:26:06.654 { 00:26:06.654 "subsystem": "bdev", 00:26:06.654 "config": [ 00:26:06.654 { 00:26:06.654 "params": { 00:26:06.654 "trtype": "pcie", 00:26:06.654 "traddr": "0000:00:06.0", 00:26:06.654 "name": "Nvme0" 00:26:06.654 }, 00:26:06.654 "method": "bdev_nvme_attach_controller" 00:26:06.654 }, 00:26:06.654 { 00:26:06.654 "method": "bdev_wait_for_examine" 00:26:06.654 } 00:26:06.654 ] 00:26:06.654 } 00:26:06.654 ] 00:26:06.654 } 00:26:06.654 [2024-07-26 05:23:25.507883] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
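The JSON block printed before every spdk_dd invocation is the configuration that gen_conf emits and the test feeds to spdk_dd over a file descriptor (--json /dev/fd/62): it attaches the QEMU NVMe controller at PCI address 0000:00:06.0 as Nvme0 and then waits for bdev examination to finish, which is what makes the Nvme0n1 block device available to the copy. Assembled from the fragments in the log into an ordinary file (writing it to a bdev.json file rather than streaming it is the only liberty taken here), it looks like this:

cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "trtype": "pcie",
            "traddr": "0000:00:06.0",
            "name": "Nvme0"
          },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# any copy in this section could then point --json at the file, e.g.:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json bdev.json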
00:26:06.654 [2024-07-26 05:23:25.508047] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88300 ] 00:26:06.654 [2024-07-26 05:23:25.675696] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.947 [2024-07-26 05:23:25.833732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:07.774  Copying: 56/56 [kB] (average 54 MBps) 00:26:07.774 00:26:08.033 05:23:26 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:26:08.033 05:23:26 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:08.033 05:23:26 -- dd/common.sh@31 -- # xtrace_disable 00:26:08.033 05:23:26 -- common/autotest_common.sh@10 -- # set +x 00:26:08.033 { 00:26:08.033 "subsystems": [ 00:26:08.033 { 00:26:08.033 "subsystem": "bdev", 00:26:08.033 "config": [ 00:26:08.033 { 00:26:08.033 "params": { 00:26:08.033 "trtype": "pcie", 00:26:08.033 "traddr": "0000:00:06.0", 00:26:08.033 "name": "Nvme0" 00:26:08.033 }, 00:26:08.033 "method": "bdev_nvme_attach_controller" 00:26:08.033 }, 00:26:08.033 { 00:26:08.033 "method": "bdev_wait_for_examine" 00:26:08.033 } 00:26:08.033 ] 00:26:08.033 } 00:26:08.033 ] 00:26:08.033 } 00:26:08.033 [2024-07-26 05:23:26.945655] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:08.033 [2024-07-26 05:23:26.945809] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88326 ] 00:26:08.033 [2024-07-26 05:23:27.115166] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.292 [2024-07-26 05:23:27.265098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.487  Copying: 56/56 [kB] (average 54 MBps) 00:26:09.487 00:26:09.487 05:23:28 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:09.487 05:23:28 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:26:09.487 05:23:28 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:09.487 05:23:28 -- dd/common.sh@11 -- # local nvme_ref= 00:26:09.487 05:23:28 -- dd/common.sh@12 -- # local size=57344 00:26:09.487 05:23:28 -- dd/common.sh@14 -- # local bs=1048576 00:26:09.487 05:23:28 -- dd/common.sh@15 -- # local count=1 00:26:09.487 05:23:28 -- dd/common.sh@18 -- # gen_conf 00:26:09.487 05:23:28 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:09.487 05:23:28 -- dd/common.sh@31 -- # xtrace_disable 00:26:09.487 05:23:28 -- common/autotest_common.sh@10 -- # set +x 00:26:09.487 { 00:26:09.487 "subsystems": [ 00:26:09.487 { 00:26:09.487 "subsystem": "bdev", 00:26:09.487 "config": [ 00:26:09.487 { 00:26:09.487 "params": { 00:26:09.487 "trtype": "pcie", 00:26:09.487 "traddr": "0000:00:06.0", 00:26:09.487 "name": "Nvme0" 00:26:09.487 }, 00:26:09.487 "method": "bdev_nvme_attach_controller" 00:26:09.487 }, 00:26:09.487 { 00:26:09.487 "method": "bdev_wait_for_examine" 00:26:09.487 } 00:26:09.487 ] 00:26:09.487 } 00:26:09.487 ] 00:26:09.487 } 00:26:09.487 [2024-07-26 05:23:28.531727] Starting SPDK v24.01.1-pre git sha1 
dbef7efac / DPDK 23.11.0 initialization... 00:26:09.487 [2024-07-26 05:23:28.531903] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88346 ] 00:26:09.746 [2024-07-26 05:23:28.699418] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.005 [2024-07-26 05:23:28.857446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:10.831  Copying: 1024/1024 [kB] (average 1000 MBps) 00:26:10.831 00:26:10.831 05:23:29 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:26:10.831 05:23:29 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:10.831 05:23:29 -- dd/basic_rw.sh@23 -- # count=3 00:26:10.831 05:23:29 -- dd/basic_rw.sh@24 -- # count=3 00:26:10.831 05:23:29 -- dd/basic_rw.sh@25 -- # size=49152 00:26:10.831 05:23:29 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:26:10.831 05:23:29 -- dd/common.sh@98 -- # xtrace_disable 00:26:10.831 05:23:29 -- common/autotest_common.sh@10 -- # set +x 00:26:11.398 05:23:30 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:26:11.398 05:23:30 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:11.398 05:23:30 -- dd/common.sh@31 -- # xtrace_disable 00:26:11.398 05:23:30 -- common/autotest_common.sh@10 -- # set +x 00:26:11.398 { 00:26:11.398 "subsystems": [ 00:26:11.398 { 00:26:11.398 "subsystem": "bdev", 00:26:11.398 "config": [ 00:26:11.398 { 00:26:11.398 "params": { 00:26:11.398 "trtype": "pcie", 00:26:11.398 "traddr": "0000:00:06.0", 00:26:11.398 "name": "Nvme0" 00:26:11.398 }, 00:26:11.398 "method": "bdev_nvme_attach_controller" 00:26:11.398 }, 00:26:11.398 { 00:26:11.398 "method": "bdev_wait_for_examine" 00:26:11.398 } 00:26:11.398 ] 00:26:11.398 } 00:26:11.398 ] 00:26:11.398 } 00:26:11.398 [2024-07-26 05:23:30.401221] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
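Before every write pass the source data is refreshed: the gen_bytes calls above ask for 61440, 57344 and, here, 49152 bytes, and a 4096-byte payload is generated later for dd_rw_offset. The helper's body lives in dd/common.sh and is not captured in this log; a stand-in that emits the same amount of random data on stdout could look like the sketch below (how basic_rw.sh routes that output into its dump files is not visible here, and the real helper appears to produce printable characters, judging by the data= dump later in the log):

# Hypothetical stand-in for gen_bytes, not the dd/common.sh implementation.
gen_bytes_stub() {
    head -c "$1" /dev/urandom
}
gen_bytes_stub 49152 > dd.dump0    # refresh the source file for a 3 x 16384-byte pass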
00:26:11.398 [2024-07-26 05:23:30.401545] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88375 ] 00:26:11.657 [2024-07-26 05:23:30.569454] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.657 [2024-07-26 05:23:30.716623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.852  Copying: 48/48 [kB] (average 46 MBps) 00:26:12.852 00:26:12.852 05:23:31 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:26:12.852 05:23:31 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:12.852 05:23:31 -- dd/common.sh@31 -- # xtrace_disable 00:26:12.852 05:23:31 -- common/autotest_common.sh@10 -- # set +x 00:26:12.852 { 00:26:12.852 "subsystems": [ 00:26:12.852 { 00:26:12.852 "subsystem": "bdev", 00:26:12.852 "config": [ 00:26:12.852 { 00:26:12.852 "params": { 00:26:12.852 "trtype": "pcie", 00:26:12.852 "traddr": "0000:00:06.0", 00:26:12.852 "name": "Nvme0" 00:26:12.852 }, 00:26:12.852 "method": "bdev_nvme_attach_controller" 00:26:12.852 }, 00:26:12.852 { 00:26:12.852 "method": "bdev_wait_for_examine" 00:26:12.852 } 00:26:12.852 ] 00:26:12.852 } 00:26:12.852 ] 00:26:12.852 } 00:26:13.111 [2024-07-26 05:23:31.995630] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:13.111 [2024-07-26 05:23:31.995974] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88401 ] 00:26:13.111 [2024-07-26 05:23:32.164745] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.370 [2024-07-26 05:23:32.315098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.566  Copying: 48/48 [kB] (average 46 MBps) 00:26:14.566 00:26:14.566 05:23:33 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:14.566 05:23:33 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:26:14.566 05:23:33 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:14.566 05:23:33 -- dd/common.sh@11 -- # local nvme_ref= 00:26:14.566 05:23:33 -- dd/common.sh@12 -- # local size=49152 00:26:14.566 05:23:33 -- dd/common.sh@14 -- # local bs=1048576 00:26:14.566 05:23:33 -- dd/common.sh@15 -- # local count=1 00:26:14.566 05:23:33 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:14.566 05:23:33 -- dd/common.sh@18 -- # gen_conf 00:26:14.566 05:23:33 -- dd/common.sh@31 -- # xtrace_disable 00:26:14.566 05:23:33 -- common/autotest_common.sh@10 -- # set +x 00:26:14.566 { 00:26:14.566 "subsystems": [ 00:26:14.566 { 00:26:14.566 "subsystem": "bdev", 00:26:14.566 "config": [ 00:26:14.566 { 00:26:14.566 "params": { 00:26:14.566 "trtype": "pcie", 00:26:14.566 "traddr": "0000:00:06.0", 00:26:14.566 "name": "Nvme0" 00:26:14.566 }, 00:26:14.566 "method": "bdev_nvme_attach_controller" 00:26:14.566 }, 00:26:14.566 { 00:26:14.566 "method": "bdev_wait_for_examine" 00:26:14.566 } 00:26:14.566 ] 00:26:14.566 } 00:26:14.566 ] 00:26:14.566 } 00:26:14.566 [2024-07-26 05:23:33.506594] Starting SPDK v24.01.1-pre git sha1 
dbef7efac / DPDK 23.11.0 initialization... 00:26:14.566 [2024-07-26 05:23:33.506748] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88421 ] 00:26:14.824 [2024-07-26 05:23:33.677154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.824 [2024-07-26 05:23:33.830952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.016  Copying: 1024/1024 [kB] (average 1000 MBps) 00:26:16.016 00:26:16.016 05:23:35 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:16.016 05:23:35 -- dd/basic_rw.sh@23 -- # count=3 00:26:16.016 05:23:35 -- dd/basic_rw.sh@24 -- # count=3 00:26:16.016 05:23:35 -- dd/basic_rw.sh@25 -- # size=49152 00:26:16.016 05:23:35 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:26:16.016 05:23:35 -- dd/common.sh@98 -- # xtrace_disable 00:26:16.016 05:23:35 -- common/autotest_common.sh@10 -- # set +x 00:26:16.583 05:23:35 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:26:16.583 05:23:35 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:16.583 05:23:35 -- dd/common.sh@31 -- # xtrace_disable 00:26:16.583 05:23:35 -- common/autotest_common.sh@10 -- # set +x 00:26:16.583 { 00:26:16.583 "subsystems": [ 00:26:16.583 { 00:26:16.583 "subsystem": "bdev", 00:26:16.583 "config": [ 00:26:16.583 { 00:26:16.583 "params": { 00:26:16.583 "trtype": "pcie", 00:26:16.583 "traddr": "0000:00:06.0", 00:26:16.583 "name": "Nvme0" 00:26:16.583 }, 00:26:16.583 "method": "bdev_nvme_attach_controller" 00:26:16.583 }, 00:26:16.583 { 00:26:16.583 "method": "bdev_wait_for_examine" 00:26:16.583 } 00:26:16.583 ] 00:26:16.583 } 00:26:16.583 ] 00:26:16.583 } 00:26:16.583 [2024-07-26 05:23:35.505536] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
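One piece of this test the trace only hints at is teardown: basic_rw.sh installed trap cleanup EXIT at the top of the section, with test_file0 and test_file1 pointing at dd.dump0 and dd.dump1 under /home/vagrant/spdk_repo/spdk/test/dd, so the scratch files are discarded when the script exits. The body of cleanup is not captured in this log; assuming it only needs to remove those two files, a minimal equivalent would be:

# Assumed minimal cleanup; the real dd/basic_rw.sh function may do more.
cleanup() {
    rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
          /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
}
trap cleanup EXIT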
00:26:16.583 [2024-07-26 05:23:35.505809] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88451 ] 00:26:16.583 [2024-07-26 05:23:35.657644] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.841 [2024-07-26 05:23:35.806893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.036  Copying: 48/48 [kB] (average 46 MBps) 00:26:18.036 00:26:18.036 05:23:36 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:26:18.036 05:23:36 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:18.036 05:23:36 -- dd/common.sh@31 -- # xtrace_disable 00:26:18.036 05:23:36 -- common/autotest_common.sh@10 -- # set +x 00:26:18.036 { 00:26:18.036 "subsystems": [ 00:26:18.036 { 00:26:18.036 "subsystem": "bdev", 00:26:18.036 "config": [ 00:26:18.036 { 00:26:18.036 "params": { 00:26:18.036 "trtype": "pcie", 00:26:18.036 "traddr": "0000:00:06.0", 00:26:18.036 "name": "Nvme0" 00:26:18.036 }, 00:26:18.036 "method": "bdev_nvme_attach_controller" 00:26:18.036 }, 00:26:18.036 { 00:26:18.036 "method": "bdev_wait_for_examine" 00:26:18.036 } 00:26:18.036 ] 00:26:18.036 } 00:26:18.036 ] 00:26:18.036 } 00:26:18.036 [2024-07-26 05:23:36.915051] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:18.036 [2024-07-26 05:23:36.915216] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88470 ] 00:26:18.036 [2024-07-26 05:23:37.084560] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.294 [2024-07-26 05:23:37.235733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.488  Copying: 48/48 [kB] (average 46 MBps) 00:26:19.488 00:26:19.488 05:23:38 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:19.488 05:23:38 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:26:19.488 05:23:38 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:19.488 05:23:38 -- dd/common.sh@11 -- # local nvme_ref= 00:26:19.488 05:23:38 -- dd/common.sh@12 -- # local size=49152 00:26:19.488 05:23:38 -- dd/common.sh@14 -- # local bs=1048576 00:26:19.488 05:23:38 -- dd/common.sh@15 -- # local count=1 00:26:19.488 05:23:38 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:19.488 05:23:38 -- dd/common.sh@18 -- # gen_conf 00:26:19.488 05:23:38 -- dd/common.sh@31 -- # xtrace_disable 00:26:19.488 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:26:19.488 { 00:26:19.488 "subsystems": [ 00:26:19.488 { 00:26:19.488 "subsystem": "bdev", 00:26:19.488 "config": [ 00:26:19.488 { 00:26:19.488 "params": { 00:26:19.488 "trtype": "pcie", 00:26:19.488 "traddr": "0000:00:06.0", 00:26:19.488 "name": "Nvme0" 00:26:19.488 }, 00:26:19.488 "method": "bdev_nvme_attach_controller" 00:26:19.488 }, 00:26:19.488 { 00:26:19.488 "method": "bdev_wait_for_examine" 00:26:19.488 } 00:26:19.488 ] 00:26:19.488 } 00:26:19.488 ] 00:26:19.488 } 00:26:19.488 [2024-07-26 05:23:38.514481] Starting SPDK v24.01.1-pre git sha1 
dbef7efac / DPDK 23.11.0 initialization... 00:26:19.488 [2024-07-26 05:23:38.514639] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88496 ] 00:26:19.748 [2024-07-26 05:23:38.687090] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.007 [2024-07-26 05:23:38.877459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.204  Copying: 1024/1024 [kB] (average 1000 MBps) 00:26:21.204 00:26:21.204 ************************************ 00:26:21.204 END TEST dd_rw 00:26:21.204 ************************************ 00:26:21.204 00:26:21.204 real 0m30.311s 00:26:21.204 user 0m24.838s 00:26:21.204 sys 0m3.769s 00:26:21.204 05:23:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:21.204 05:23:40 -- common/autotest_common.sh@10 -- # set +x 00:26:21.204 05:23:40 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:26:21.204 05:23:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:21.204 05:23:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:21.204 05:23:40 -- common/autotest_common.sh@10 -- # set +x 00:26:21.204 ************************************ 00:26:21.204 START TEST dd_rw_offset 00:26:21.204 ************************************ 00:26:21.204 05:23:40 -- common/autotest_common.sh@1104 -- # basic_offset 00:26:21.204 05:23:40 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:26:21.204 05:23:40 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:26:21.204 05:23:40 -- dd/common.sh@98 -- # xtrace_disable 00:26:21.204 05:23:40 -- common/autotest_common.sh@10 -- # set +x 00:26:21.204 05:23:40 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:26:21.205 05:23:40 -- dd/basic_rw.sh@56 -- # 
data=8uk44h4qnncnp8beaw4lsj7i0xt438h8dztjl799nfn6vk996baza64vo920rjc7e2jk8swxz9m61obfmvlvxk5ohui8teobb1596q9twtjhpm5vesv6exrqm43muhcor3f3w94dtx7vk1shiba7hfz7oeanag8v4zlafhe5i3euhe1xssjtjqfw2j06kl1258ixzknmx8uc6jirfp7bm5qefr48r745ihe4y4g6awfnjmbwy1ukgqlzzr89rysz6idy0xeu7ja4c02vkbjwpwolqgzrsphish1e6flpijghrp8dxhjkbha24fxco4l46pmlfq4cjvuxs5pkw39oxyt47ydp99iclv73h34e9ylzdyj5ty79dsssw7moppr5a8n1gi6l2nxjz10adctfwj3o1yyiru2460s2pxbe2w0hsakq1f1odirm4x8vh3uipxtac4fkzf8t21w5vfhwinrv96zau7pjdem5luyj18vhegbws3ght1jzquo4ts2bs9q1y8eaxk9b7r0a5nbbn2fjvnco1e2olmtdouvz4wsxptayh9r74lqk209wi699wphhlgiz0ntn3wvo0a7xhx1kn4xe10ludsetn1osiat0h8vqtk6mo4s4dq901jc5diksneeuy99nd0aneyo1wtrsappfbyvo0ioxf115focazc7j6gfc3xerir5k8koibv5wyqojwoucewsm3joznoubi9i2vh81z6u3bsl6lcsfaevrdii8qsm4kxtk084oxgkn9920ltc0gmqyozf6rm7ftt7koggtqd1njuz9o3cvwrit06pz5kj4e1ahdyh22oo7vchjhwob18nmkgbf8v1j7scguya3762xmhu0nekzkmvf4ksw94866txpjfs9psjlezyg70eosi3ejydaosxfdn8xn46opkcqo1uhjdsmdkf1pmbk8d0j60cd6kk10x3v4qeayf9u4gb0c8bavz9ycrkuhlx914bjyh5apudo83y6f0f79vsx4g5oqr77emvpb8dnp4hbdk86jims5s25bsptqzsvtxqeu5zeskqyr0zec90c9e0ij0gf74fh9jgaukj9p96dn58gcb1fg1eieif6fcwa4hs9v4lt0bluf723xvtju2bq88syuvo7hc0htsg9bdrrs079c9i5phytwyupvke0h1xl21nze6ifke0xaal1gi6266p6m2emlfiv7cq3ufcp7vb342fw86r30uit43m7lc0imi9cw1v1kza8cwx6y0xmnguhqtdgrn5n74bhpcxmvzhw2i92obvksbfhuildv2lqc6ukah8tt85ial486mzy3yvjedj6ut7ivssqk8ufri9zs46lg2rh6pvqvdr7npumzv25m0ohl9ti2armls4er85nkfosqpkoub9ya6ffy224gjs7qnepkouvftmdoildyyhdjkmitf5zwzsioimijudm9fbxm3gpx6msnzh2uamletrj41e2sxzoihgv1ay97v0xghohub4kd8pfen2q3k1l2ijfgliohm7ysppiajwt801jvvy8s06gahu3fyzqhlhvuyteu1f2zi38w2q8xnp9e6ph8qqozk3ef6mzlg34af6sszlpbkco0c54o568r4i2qqfds3olufaz67egd2x3mlf8k5r1vjotuymzppyxy7pntvwull8iuoehaxqlsba6z3zvltpmc319sswvh53esphp3nw2u7c7gn4s26cdetxyb7p9fws7vly1lsijzjp2ql6tuf0y3jd8r4h2vnfjm6vw8impbn9b31roto9hr8elyi7yujujswlkz4wrmt9uspavjwhyxntwuqdiw03yem713p1eb0h3rbdrfj3v13pod6wedpk12ta4aqsvvkkzd9ptt1lpr6s2d2kuag72a2kc0jqfe44bqx7rsg50hmr6q310h99c2q3jxvroe8ffwu5nf1g2240emw3wrd7fzrd4ko3me1y9ocinv1ze613l8ri8zx6zfnafj2yuskofsnae7l4z30mlph0p954pobiixot6ya6d99eucpgq1ue8e4gtzckpzv3o9ueohif35cm8v37d4suwo1ifttmfg44kmpwvrsu28ngsmd6jlti1zhqwtj4omvwhnd3pgql6velamn25xezyagzt145mjc8h8l23pp5q7p1fzd830r6s317z0bffpvooks3lpyc6p1qykvm1conz4lcs1ujye91fyd1iy8gdodsemg6oip8i3pqeradhoq7u1if5p9dhzjznyfov8dbvlbrpps14xnsh5d08b6fy52yj0nf12zcx21gc5xtteuqdxq1qo512u8vqiahkgqezn26708zzk8hqhdn569g3r2a4k9ls3tctcmlhr6yldt9qg82uud6w8r2x5dknwgzab1fn7a0iz8wq3xv1t7jshuprwmbxwgiodqcajecnc7pouh9x854qkjv663gv3hmofsgx6uiygs07mgkt6mivptk6umxsedyxpy0a166k9kme2gdag0aghq0dqgqh7z723yifj1p0bu2ga94s8ela6t8z7lqrx8l30e0nikqmhymd47mlqv24kw961xv5z5ki73ca9vcet24naxokphxbqbym5tgw5hijpb3s5h7h24m7c6mt1sr16gr9k6ta2zb8skabj4ka90afivdxwcj1g105ftm9m4ah17g0dyg5qvd1jst9u5utf6vv67gpwcai5jhlpfphzao7i0fphsgmei8xsy5xsrh6e7gniwi0r76df9j9nl9fabltqqy94lfsk2abbqzb1t9xmrp4e2b5oz8d11yaruy75xrskezhl93dw48496fdr6e1zca1szifs7axalgspwstesn6o302v8eg5542tk8iemchp3xwy8e85jepn76qzwmpvwthk537agphv16cw4i7yff1ppoa9039nhgud7o4jxiqh5db2ksxmmmwsmycf7ukgdb74nhhcxsb3rmyiagy81pol1w2ack2frlverk2gycdkuuxkgs3g0skwlyhwckyt235dwcjve909xvhtk2l47knhj605eji7f2jnqdtw4db2o8m2ara7cq09e05wq779tzcuf2h56d2hr5vmniyehqoljbapsltmbd9cj7hvly0hm8zt7h2fiiistt5k5691zpkllb6z7i3so7hvqq4y7dvev1gegh6j0vbee3zrl6yya8pw4mcofxfo9gn2vzu6g4qiq2n9kpoeuiqfarvrk1rnitfx1hgskthn4ev4mc372tii31g3al26uxg39w256vlt71i7u3047gxvvgofmdla1g6fmftmwc0xue7vytd07m0z6gys6zqgmonypccdyx6369rokwpwp71j675u1t88alxs5vv73h4o0ljn4k6co19bow35cs0noz8pxwjov762itt4p3339mea6zxdc9w5rfvc1sh6b1302r9y2df9fbjn8syg4yqw8rz1v0jk9k15yljykyw5vlvi7b5qi0otyi5536iwjlwbvvgfzya35w5u0
okim73zbirq0t73cqnwdjmtruyv0p805aik1g94i3fl22xvxgb8p675c97f0qpquqha0xf3x41zw9lxtvvzoj3widkpfnnod41qnv0zu13dio90vbafgvckq1v86fmwssvwhunehx9d0j9ayo9e36byq83xwxh89n33x2y5tkz8xihgjpqozdcwdegot09174pnlg4r99f3ogi5h0nnp5kn4wt0hjrox2i9r6wzumpwlwrsg2q0fiitldgoh7g46xsyjlahdolbfn1vqhjvw2vdb9k4rim21faq7fbgb4t0b52waw056gnq95phohadvdqy051ptskubq4q67ra0fru8g9qt8pyp1p4ya2yhzha3ct6boij1p89hcoy4k2780w056494w49hbx6joadbu57n6mvfq5yn7e4muhswpkiom1xs9lu6c9k3540lo9ni1sa9pbemj8az0k6tfp0pskkkp0dibb021tsw5c87zujvnv78vnwp8jo8zdnfml2e6jk85lx01exva7uo09w2ouls3y78s0cgr5 00:26:21.205 05:23:40 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:26:21.205 05:23:40 -- dd/basic_rw.sh@59 -- # gen_conf 00:26:21.205 05:23:40 -- dd/common.sh@31 -- # xtrace_disable 00:26:21.205 05:23:40 -- common/autotest_common.sh@10 -- # set +x 00:26:21.205 { 00:26:21.205 "subsystems": [ 00:26:21.205 { 00:26:21.205 "subsystem": "bdev", 00:26:21.205 "config": [ 00:26:21.205 { 00:26:21.205 "params": { 00:26:21.205 "trtype": "pcie", 00:26:21.205 "traddr": "0000:00:06.0", 00:26:21.205 "name": "Nvme0" 00:26:21.205 }, 00:26:21.205 "method": "bdev_nvme_attach_controller" 00:26:21.205 }, 00:26:21.205 { 00:26:21.205 "method": "bdev_wait_for_examine" 00:26:21.205 } 00:26:21.205 ] 00:26:21.205 } 00:26:21.205 ] 00:26:21.205 } 00:26:21.205 [2024-07-26 05:23:40.172401] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:21.205 [2024-07-26 05:23:40.172557] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88543 ] 00:26:21.465 [2024-07-26 05:23:40.345319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.465 [2024-07-26 05:23:40.567916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.001  Copying: 4096/4096 [B] (average 4000 kBps) 00:26:23.001 00:26:23.001 05:23:41 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:26:23.001 05:23:41 -- dd/basic_rw.sh@65 -- # gen_conf 00:26:23.001 05:23:41 -- dd/common.sh@31 -- # xtrace_disable 00:26:23.001 05:23:41 -- common/autotest_common.sh@10 -- # set +x 00:26:23.001 { 00:26:23.001 "subsystems": [ 00:26:23.001 { 00:26:23.001 "subsystem": "bdev", 00:26:23.001 "config": [ 00:26:23.001 { 00:26:23.001 "params": { 00:26:23.001 "trtype": "pcie", 00:26:23.001 "traddr": "0000:00:06.0", 00:26:23.001 "name": "Nvme0" 00:26:23.001 }, 00:26:23.001 "method": "bdev_nvme_attach_controller" 00:26:23.001 }, 00:26:23.001 { 00:26:23.001 "method": "bdev_wait_for_examine" 00:26:23.001 } 00:26:23.001 ] 00:26:23.001 } 00:26:23.001 ] 00:26:23.001 } 00:26:23.001 [2024-07-26 05:23:41.838849] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:26:23.001 [2024-07-26 05:23:41.839037] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88562 ] 00:26:23.001 [2024-07-26 05:23:42.008607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.260 [2024-07-26 05:23:42.164622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.459  Copying: 4096/4096 [B] (average 4000 kBps) 00:26:24.459 00:26:24.459 05:23:43 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:26:24.459 05:23:43 -- dd/basic_rw.sh@72 -- # [[ 8uk44h4qnncnp8beaw4lsj7i0xt438h8dztjl799nfn6vk996baza64vo920rjc7e2jk8swxz9m61obfmvlvxk5ohui8teobb1596q9twtjhpm5vesv6exrqm43muhcor3f3w94dtx7vk1shiba7hfz7oeanag8v4zlafhe5i3euhe1xssjtjqfw2j06kl1258ixzknmx8uc6jirfp7bm5qefr48r745ihe4y4g6awfnjmbwy1ukgqlzzr89rysz6idy0xeu7ja4c02vkbjwpwolqgzrsphish1e6flpijghrp8dxhjkbha24fxco4l46pmlfq4cjvuxs5pkw39oxyt47ydp99iclv73h34e9ylzdyj5ty79dsssw7moppr5a8n1gi6l2nxjz10adctfwj3o1yyiru2460s2pxbe2w0hsakq1f1odirm4x8vh3uipxtac4fkzf8t21w5vfhwinrv96zau7pjdem5luyj18vhegbws3ght1jzquo4ts2bs9q1y8eaxk9b7r0a5nbbn2fjvnco1e2olmtdouvz4wsxptayh9r74lqk209wi699wphhlgiz0ntn3wvo0a7xhx1kn4xe10ludsetn1osiat0h8vqtk6mo4s4dq901jc5diksneeuy99nd0aneyo1wtrsappfbyvo0ioxf115focazc7j6gfc3xerir5k8koibv5wyqojwoucewsm3joznoubi9i2vh81z6u3bsl6lcsfaevrdii8qsm4kxtk084oxgkn9920ltc0gmqyozf6rm7ftt7koggtqd1njuz9o3cvwrit06pz5kj4e1ahdyh22oo7vchjhwob18nmkgbf8v1j7scguya3762xmhu0nekzkmvf4ksw94866txpjfs9psjlezyg70eosi3ejydaosxfdn8xn46opkcqo1uhjdsmdkf1pmbk8d0j60cd6kk10x3v4qeayf9u4gb0c8bavz9ycrkuhlx914bjyh5apudo83y6f0f79vsx4g5oqr77emvpb8dnp4hbdk86jims5s25bsptqzsvtxqeu5zeskqyr0zec90c9e0ij0gf74fh9jgaukj9p96dn58gcb1fg1eieif6fcwa4hs9v4lt0bluf723xvtju2bq88syuvo7hc0htsg9bdrrs079c9i5phytwyupvke0h1xl21nze6ifke0xaal1gi6266p6m2emlfiv7cq3ufcp7vb342fw86r30uit43m7lc0imi9cw1v1kza8cwx6y0xmnguhqtdgrn5n74bhpcxmvzhw2i92obvksbfhuildv2lqc6ukah8tt85ial486mzy3yvjedj6ut7ivssqk8ufri9zs46lg2rh6pvqvdr7npumzv25m0ohl9ti2armls4er85nkfosqpkoub9ya6ffy224gjs7qnepkouvftmdoildyyhdjkmitf5zwzsioimijudm9fbxm3gpx6msnzh2uamletrj41e2sxzoihgv1ay97v0xghohub4kd8pfen2q3k1l2ijfgliohm7ysppiajwt801jvvy8s06gahu3fyzqhlhvuyteu1f2zi38w2q8xnp9e6ph8qqozk3ef6mzlg34af6sszlpbkco0c54o568r4i2qqfds3olufaz67egd2x3mlf8k5r1vjotuymzppyxy7pntvwull8iuoehaxqlsba6z3zvltpmc319sswvh53esphp3nw2u7c7gn4s26cdetxyb7p9fws7vly1lsijzjp2ql6tuf0y3jd8r4h2vnfjm6vw8impbn9b31roto9hr8elyi7yujujswlkz4wrmt9uspavjwhyxntwuqdiw03yem713p1eb0h3rbdrfj3v13pod6wedpk12ta4aqsvvkkzd9ptt1lpr6s2d2kuag72a2kc0jqfe44bqx7rsg50hmr6q310h99c2q3jxvroe8ffwu5nf1g2240emw3wrd7fzrd4ko3me1y9ocinv1ze613l8ri8zx6zfnafj2yuskofsnae7l4z30mlph0p954pobiixot6ya6d99eucpgq1ue8e4gtzckpzv3o9ueohif35cm8v37d4suwo1ifttmfg44kmpwvrsu28ngsmd6jlti1zhqwtj4omvwhnd3pgql6velamn25xezyagzt145mjc8h8l23pp5q7p1fzd830r6s317z0bffpvooks3lpyc6p1qykvm1conz4lcs1ujye91fyd1iy8gdodsemg6oip8i3pqeradhoq7u1if5p9dhzjznyfov8dbvlbrpps14xnsh5d08b6fy52yj0nf12zcx21gc5xtteuqdxq1qo512u8vqiahkgqezn26708zzk8hqhdn569g3r2a4k9ls3tctcmlhr6yldt9qg82uud6w8r2x5dknwgzab1fn7a0iz8wq3xv1t7jshuprwmbxwgiodqcajecnc7pouh9x854qkjv663gv3hmofsgx6uiygs07mgkt6mivptk6umxsedyxpy0a166k9kme2gdag0aghq0dqgqh7z723yifj1p0bu2ga94s8ela6t8z7lqrx8l30e0nikqmhymd47mlqv24kw961xv5z5ki73ca9vcet24naxokphxbqbym5tgw5hijpb3s5h7h24m7c6mt1sr16gr9k6ta2zb8skabj4ka90afivdxwcj1g105ftm9m4ah17g0dyg5qvd1jst9u5utf6vv67gpwcai5jhlpfphzao7i0fphsgmei8xsy5xsrh6e7gniwi0r76df9j9nl9fabltqqy94lfsk2abbqzb1t9xmrp4e2b5oz8d11yaruy75xrskezhl93dw48496fdr6e1zca1s
zifs7axalgspwstesn6o302v8eg5542tk8iemchp3xwy8e85jepn76qzwmpvwthk537agphv16cw4i7yff1ppoa9039nhgud7o4jxiqh5db2ksxmmmwsmycf7ukgdb74nhhcxsb3rmyiagy81pol1w2ack2frlverk2gycdkuuxkgs3g0skwlyhwckyt235dwcjve909xvhtk2l47knhj605eji7f2jnqdtw4db2o8m2ara7cq09e05wq779tzcuf2h56d2hr5vmniyehqoljbapsltmbd9cj7hvly0hm8zt7h2fiiistt5k5691zpkllb6z7i3so7hvqq4y7dvev1gegh6j0vbee3zrl6yya8pw4mcofxfo9gn2vzu6g4qiq2n9kpoeuiqfarvrk1rnitfx1hgskthn4ev4mc372tii31g3al26uxg39w256vlt71i7u3047gxvvgofmdla1g6fmftmwc0xue7vytd07m0z6gys6zqgmonypccdyx6369rokwpwp71j675u1t88alxs5vv73h4o0ljn4k6co19bow35cs0noz8pxwjov762itt4p3339mea6zxdc9w5rfvc1sh6b1302r9y2df9fbjn8syg4yqw8rz1v0jk9k15yljykyw5vlvi7b5qi0otyi5536iwjlwbvvgfzya35w5u0okim73zbirq0t73cqnwdjmtruyv0p805aik1g94i3fl22xvxgb8p675c97f0qpquqha0xf3x41zw9lxtvvzoj3widkpfnnod41qnv0zu13dio90vbafgvckq1v86fmwssvwhunehx9d0j9ayo9e36byq83xwxh89n33x2y5tkz8xihgjpqozdcwdegot09174pnlg4r99f3ogi5h0nnp5kn4wt0hjrox2i9r6wzumpwlwrsg2q0fiitldgoh7g46xsyjlahdolbfn1vqhjvw2vdb9k4rim21faq7fbgb4t0b52waw056gnq95phohadvdqy051ptskubq4q67ra0fru8g9qt8pyp1p4ya2yhzha3ct6boij1p89hcoy4k2780w056494w49hbx6joadbu57n6mvfq5yn7e4muhswpkiom1xs9lu6c9k3540lo9ni1sa9pbemj8az0k6tfp0pskkkp0dibb021tsw5c87zujvnv78vnwp8jo8zdnfml2e6jk85lx01exva7uo09w2ouls3y78s0cgr5 == \8\u\k\4\4\h\4\q\n\n\c\n\p\8\b\e\a\w\4\l\s\j\7\i\0\x\t\4\3\8\h\8\d\z\t\j\l\7\9\9\n\f\n\6\v\k\9\9\6\b\a\z\a\6\4\v\o\9\2\0\r\j\c\7\e\2\j\k\8\s\w\x\z\9\m\6\1\o\b\f\m\v\l\v\x\k\5\o\h\u\i\8\t\e\o\b\b\1\5\9\6\q\9\t\w\t\j\h\p\m\5\v\e\s\v\6\e\x\r\q\m\4\3\m\u\h\c\o\r\3\f\3\w\9\4\d\t\x\7\v\k\1\s\h\i\b\a\7\h\f\z\7\o\e\a\n\a\g\8\v\4\z\l\a\f\h\e\5\i\3\e\u\h\e\1\x\s\s\j\t\j\q\f\w\2\j\0\6\k\l\1\2\5\8\i\x\z\k\n\m\x\8\u\c\6\j\i\r\f\p\7\b\m\5\q\e\f\r\4\8\r\7\4\5\i\h\e\4\y\4\g\6\a\w\f\n\j\m\b\w\y\1\u\k\g\q\l\z\z\r\8\9\r\y\s\z\6\i\d\y\0\x\e\u\7\j\a\4\c\0\2\v\k\b\j\w\p\w\o\l\q\g\z\r\s\p\h\i\s\h\1\e\6\f\l\p\i\j\g\h\r\p\8\d\x\h\j\k\b\h\a\2\4\f\x\c\o\4\l\4\6\p\m\l\f\q\4\c\j\v\u\x\s\5\p\k\w\3\9\o\x\y\t\4\7\y\d\p\9\9\i\c\l\v\7\3\h\3\4\e\9\y\l\z\d\y\j\5\t\y\7\9\d\s\s\s\w\7\m\o\p\p\r\5\a\8\n\1\g\i\6\l\2\n\x\j\z\1\0\a\d\c\t\f\w\j\3\o\1\y\y\i\r\u\2\4\6\0\s\2\p\x\b\e\2\w\0\h\s\a\k\q\1\f\1\o\d\i\r\m\4\x\8\v\h\3\u\i\p\x\t\a\c\4\f\k\z\f\8\t\2\1\w\5\v\f\h\w\i\n\r\v\9\6\z\a\u\7\p\j\d\e\m\5\l\u\y\j\1\8\v\h\e\g\b\w\s\3\g\h\t\1\j\z\q\u\o\4\t\s\2\b\s\9\q\1\y\8\e\a\x\k\9\b\7\r\0\a\5\n\b\b\n\2\f\j\v\n\c\o\1\e\2\o\l\m\t\d\o\u\v\z\4\w\s\x\p\t\a\y\h\9\r\7\4\l\q\k\2\0\9\w\i\6\9\9\w\p\h\h\l\g\i\z\0\n\t\n\3\w\v\o\0\a\7\x\h\x\1\k\n\4\x\e\1\0\l\u\d\s\e\t\n\1\o\s\i\a\t\0\h\8\v\q\t\k\6\m\o\4\s\4\d\q\9\0\1\j\c\5\d\i\k\s\n\e\e\u\y\9\9\n\d\0\a\n\e\y\o\1\w\t\r\s\a\p\p\f\b\y\v\o\0\i\o\x\f\1\1\5\f\o\c\a\z\c\7\j\6\g\f\c\3\x\e\r\i\r\5\k\8\k\o\i\b\v\5\w\y\q\o\j\w\o\u\c\e\w\s\m\3\j\o\z\n\o\u\b\i\9\i\2\v\h\8\1\z\6\u\3\b\s\l\6\l\c\s\f\a\e\v\r\d\i\i\8\q\s\m\4\k\x\t\k\0\8\4\o\x\g\k\n\9\9\2\0\l\t\c\0\g\m\q\y\o\z\f\6\r\m\7\f\t\t\7\k\o\g\g\t\q\d\1\n\j\u\z\9\o\3\c\v\w\r\i\t\0\6\p\z\5\k\j\4\e\1\a\h\d\y\h\2\2\o\o\7\v\c\h\j\h\w\o\b\1\8\n\m\k\g\b\f\8\v\1\j\7\s\c\g\u\y\a\3\7\6\2\x\m\h\u\0\n\e\k\z\k\m\v\f\4\k\s\w\9\4\8\6\6\t\x\p\j\f\s\9\p\s\j\l\e\z\y\g\7\0\e\o\s\i\3\e\j\y\d\a\o\s\x\f\d\n\8\x\n\4\6\o\p\k\c\q\o\1\u\h\j\d\s\m\d\k\f\1\p\m\b\k\8\d\0\j\6\0\c\d\6\k\k\1\0\x\3\v\4\q\e\a\y\f\9\u\4\g\b\0\c\8\b\a\v\z\9\y\c\r\k\u\h\l\x\9\1\4\b\j\y\h\5\a\p\u\d\o\8\3\y\6\f\0\f\7\9\v\s\x\4\g\5\o\q\r\7\7\e\m\v\p\b\8\d\n\p\4\h\b\d\k\8\6\j\i\m\s\5\s\2\5\b\s\p\t\q\z\s\v\t\x\q\e\u\5\z\e\s\k\q\y\r\0\z\e\c\9\0\c\9\e\0\i\j\0\g\f\7\4\f\h\9\j\g\a\u\k\j\9\p\9\6\d\n\5\8\g\c\b\1\f\g\1\e\i\e\i\f\6\f\c\w\a\4\h\s\9\v\4\l\t\0\b\l\u\f\7\2\3\x\v\t\j\u\2\b\q\8\8\s\y\u\v\o\7\h\c\0\h\t\s\g\9
\b\d\r\r\s\0\7\9\c\9\i\5\p\h\y\t\w\y\u\p\v\k\e\0\h\1\x\l\2\1\n\z\e\6\i\f\k\e\0\x\a\a\l\1\g\i\6\2\6\6\p\6\m\2\e\m\l\f\i\v\7\c\q\3\u\f\c\p\7\v\b\3\4\2\f\w\8\6\r\3\0\u\i\t\4\3\m\7\l\c\0\i\m\i\9\c\w\1\v\1\k\z\a\8\c\w\x\6\y\0\x\m\n\g\u\h\q\t\d\g\r\n\5\n\7\4\b\h\p\c\x\m\v\z\h\w\2\i\9\2\o\b\v\k\s\b\f\h\u\i\l\d\v\2\l\q\c\6\u\k\a\h\8\t\t\8\5\i\a\l\4\8\6\m\z\y\3\y\v\j\e\d\j\6\u\t\7\i\v\s\s\q\k\8\u\f\r\i\9\z\s\4\6\l\g\2\r\h\6\p\v\q\v\d\r\7\n\p\u\m\z\v\2\5\m\0\o\h\l\9\t\i\2\a\r\m\l\s\4\e\r\8\5\n\k\f\o\s\q\p\k\o\u\b\9\y\a\6\f\f\y\2\2\4\g\j\s\7\q\n\e\p\k\o\u\v\f\t\m\d\o\i\l\d\y\y\h\d\j\k\m\i\t\f\5\z\w\z\s\i\o\i\m\i\j\u\d\m\9\f\b\x\m\3\g\p\x\6\m\s\n\z\h\2\u\a\m\l\e\t\r\j\4\1\e\2\s\x\z\o\i\h\g\v\1\a\y\9\7\v\0\x\g\h\o\h\u\b\4\k\d\8\p\f\e\n\2\q\3\k\1\l\2\i\j\f\g\l\i\o\h\m\7\y\s\p\p\i\a\j\w\t\8\0\1\j\v\v\y\8\s\0\6\g\a\h\u\3\f\y\z\q\h\l\h\v\u\y\t\e\u\1\f\2\z\i\3\8\w\2\q\8\x\n\p\9\e\6\p\h\8\q\q\o\z\k\3\e\f\6\m\z\l\g\3\4\a\f\6\s\s\z\l\p\b\k\c\o\0\c\5\4\o\5\6\8\r\4\i\2\q\q\f\d\s\3\o\l\u\f\a\z\6\7\e\g\d\2\x\3\m\l\f\8\k\5\r\1\v\j\o\t\u\y\m\z\p\p\y\x\y\7\p\n\t\v\w\u\l\l\8\i\u\o\e\h\a\x\q\l\s\b\a\6\z\3\z\v\l\t\p\m\c\3\1\9\s\s\w\v\h\5\3\e\s\p\h\p\3\n\w\2\u\7\c\7\g\n\4\s\2\6\c\d\e\t\x\y\b\7\p\9\f\w\s\7\v\l\y\1\l\s\i\j\z\j\p\2\q\l\6\t\u\f\0\y\3\j\d\8\r\4\h\2\v\n\f\j\m\6\v\w\8\i\m\p\b\n\9\b\3\1\r\o\t\o\9\h\r\8\e\l\y\i\7\y\u\j\u\j\s\w\l\k\z\4\w\r\m\t\9\u\s\p\a\v\j\w\h\y\x\n\t\w\u\q\d\i\w\0\3\y\e\m\7\1\3\p\1\e\b\0\h\3\r\b\d\r\f\j\3\v\1\3\p\o\d\6\w\e\d\p\k\1\2\t\a\4\a\q\s\v\v\k\k\z\d\9\p\t\t\1\l\p\r\6\s\2\d\2\k\u\a\g\7\2\a\2\k\c\0\j\q\f\e\4\4\b\q\x\7\r\s\g\5\0\h\m\r\6\q\3\1\0\h\9\9\c\2\q\3\j\x\v\r\o\e\8\f\f\w\u\5\n\f\1\g\2\2\4\0\e\m\w\3\w\r\d\7\f\z\r\d\4\k\o\3\m\e\1\y\9\o\c\i\n\v\1\z\e\6\1\3\l\8\r\i\8\z\x\6\z\f\n\a\f\j\2\y\u\s\k\o\f\s\n\a\e\7\l\4\z\3\0\m\l\p\h\0\p\9\5\4\p\o\b\i\i\x\o\t\6\y\a\6\d\9\9\e\u\c\p\g\q\1\u\e\8\e\4\g\t\z\c\k\p\z\v\3\o\9\u\e\o\h\i\f\3\5\c\m\8\v\3\7\d\4\s\u\w\o\1\i\f\t\t\m\f\g\4\4\k\m\p\w\v\r\s\u\2\8\n\g\s\m\d\6\j\l\t\i\1\z\h\q\w\t\j\4\o\m\v\w\h\n\d\3\p\g\q\l\6\v\e\l\a\m\n\2\5\x\e\z\y\a\g\z\t\1\4\5\m\j\c\8\h\8\l\2\3\p\p\5\q\7\p\1\f\z\d\8\3\0\r\6\s\3\1\7\z\0\b\f\f\p\v\o\o\k\s\3\l\p\y\c\6\p\1\q\y\k\v\m\1\c\o\n\z\4\l\c\s\1\u\j\y\e\9\1\f\y\d\1\i\y\8\g\d\o\d\s\e\m\g\6\o\i\p\8\i\3\p\q\e\r\a\d\h\o\q\7\u\1\i\f\5\p\9\d\h\z\j\z\n\y\f\o\v\8\d\b\v\l\b\r\p\p\s\1\4\x\n\s\h\5\d\0\8\b\6\f\y\5\2\y\j\0\n\f\1\2\z\c\x\2\1\g\c\5\x\t\t\e\u\q\d\x\q\1\q\o\5\1\2\u\8\v\q\i\a\h\k\g\q\e\z\n\2\6\7\0\8\z\z\k\8\h\q\h\d\n\5\6\9\g\3\r\2\a\4\k\9\l\s\3\t\c\t\c\m\l\h\r\6\y\l\d\t\9\q\g\8\2\u\u\d\6\w\8\r\2\x\5\d\k\n\w\g\z\a\b\1\f\n\7\a\0\i\z\8\w\q\3\x\v\1\t\7\j\s\h\u\p\r\w\m\b\x\w\g\i\o\d\q\c\a\j\e\c\n\c\7\p\o\u\h\9\x\8\5\4\q\k\j\v\6\6\3\g\v\3\h\m\o\f\s\g\x\6\u\i\y\g\s\0\7\m\g\k\t\6\m\i\v\p\t\k\6\u\m\x\s\e\d\y\x\p\y\0\a\1\6\6\k\9\k\m\e\2\g\d\a\g\0\a\g\h\q\0\d\q\g\q\h\7\z\7\2\3\y\i\f\j\1\p\0\b\u\2\g\a\9\4\s\8\e\l\a\6\t\8\z\7\l\q\r\x\8\l\3\0\e\0\n\i\k\q\m\h\y\m\d\4\7\m\l\q\v\2\4\k\w\9\6\1\x\v\5\z\5\k\i\7\3\c\a\9\v\c\e\t\2\4\n\a\x\o\k\p\h\x\b\q\b\y\m\5\t\g\w\5\h\i\j\p\b\3\s\5\h\7\h\2\4\m\7\c\6\m\t\1\s\r\1\6\g\r\9\k\6\t\a\2\z\b\8\s\k\a\b\j\4\k\a\9\0\a\f\i\v\d\x\w\c\j\1\g\1\0\5\f\t\m\9\m\4\a\h\1\7\g\0\d\y\g\5\q\v\d\1\j\s\t\9\u\5\u\t\f\6\v\v\6\7\g\p\w\c\a\i\5\j\h\l\p\f\p\h\z\a\o\7\i\0\f\p\h\s\g\m\e\i\8\x\s\y\5\x\s\r\h\6\e\7\g\n\i\w\i\0\r\7\6\d\f\9\j\9\n\l\9\f\a\b\l\t\q\q\y\9\4\l\f\s\k\2\a\b\b\q\z\b\1\t\9\x\m\r\p\4\e\2\b\5\o\z\8\d\1\1\y\a\r\u\y\7\5\x\r\s\k\e\z\h\l\9\3\d\w\4\8\4\9\6\f\d\r\6\e\1\z\c\a\1\s\z\i\f\s\7\a\x\a\l\g\s\p\w\s\t\e\s\n\6\o\3\0\2\v\8\e\g\5\5\4\2\t\k\8\i\e\m\c\h\p\3\x\w\y\8\e\8\5\j\e\p\n\7\6\q\z\w\m\p\v\w\t\h\k\5\3\7\a\g\p\h\v\
1\6\c\w\4\i\7\y\f\f\1\p\p\o\a\9\0\3\9\n\h\g\u\d\7\o\4\j\x\i\q\h\5\d\b\2\k\s\x\m\m\m\w\s\m\y\c\f\7\u\k\g\d\b\7\4\n\h\h\c\x\s\b\3\r\m\y\i\a\g\y\8\1\p\o\l\1\w\2\a\c\k\2\f\r\l\v\e\r\k\2\g\y\c\d\k\u\u\x\k\g\s\3\g\0\s\k\w\l\y\h\w\c\k\y\t\2\3\5\d\w\c\j\v\e\9\0\9\x\v\h\t\k\2\l\4\7\k\n\h\j\6\0\5\e\j\i\7\f\2\j\n\q\d\t\w\4\d\b\2\o\8\m\2\a\r\a\7\c\q\0\9\e\0\5\w\q\7\7\9\t\z\c\u\f\2\h\5\6\d\2\h\r\5\v\m\n\i\y\e\h\q\o\l\j\b\a\p\s\l\t\m\b\d\9\c\j\7\h\v\l\y\0\h\m\8\z\t\7\h\2\f\i\i\i\s\t\t\5\k\5\6\9\1\z\p\k\l\l\b\6\z\7\i\3\s\o\7\h\v\q\q\4\y\7\d\v\e\v\1\g\e\g\h\6\j\0\v\b\e\e\3\z\r\l\6\y\y\a\8\p\w\4\m\c\o\f\x\f\o\9\g\n\2\v\z\u\6\g\4\q\i\q\2\n\9\k\p\o\e\u\i\q\f\a\r\v\r\k\1\r\n\i\t\f\x\1\h\g\s\k\t\h\n\4\e\v\4\m\c\3\7\2\t\i\i\3\1\g\3\a\l\2\6\u\x\g\3\9\w\2\5\6\v\l\t\7\1\i\7\u\3\0\4\7\g\x\v\v\g\o\f\m\d\l\a\1\g\6\f\m\f\t\m\w\c\0\x\u\e\7\v\y\t\d\0\7\m\0\z\6\g\y\s\6\z\q\g\m\o\n\y\p\c\c\d\y\x\6\3\6\9\r\o\k\w\p\w\p\7\1\j\6\7\5\u\1\t\8\8\a\l\x\s\5\v\v\7\3\h\4\o\0\l\j\n\4\k\6\c\o\1\9\b\o\w\3\5\c\s\0\n\o\z\8\p\x\w\j\o\v\7\6\2\i\t\t\4\p\3\3\3\9\m\e\a\6\z\x\d\c\9\w\5\r\f\v\c\1\s\h\6\b\1\3\0\2\r\9\y\2\d\f\9\f\b\j\n\8\s\y\g\4\y\q\w\8\r\z\1\v\0\j\k\9\k\1\5\y\l\j\y\k\y\w\5\v\l\v\i\7\b\5\q\i\0\o\t\y\i\5\5\3\6\i\w\j\l\w\b\v\v\g\f\z\y\a\3\5\w\5\u\0\o\k\i\m\7\3\z\b\i\r\q\0\t\7\3\c\q\n\w\d\j\m\t\r\u\y\v\0\p\8\0\5\a\i\k\1\g\9\4\i\3\f\l\2\2\x\v\x\g\b\8\p\6\7\5\c\9\7\f\0\q\p\q\u\q\h\a\0\x\f\3\x\4\1\z\w\9\l\x\t\v\v\z\o\j\3\w\i\d\k\p\f\n\n\o\d\4\1\q\n\v\0\z\u\1\3\d\i\o\9\0\v\b\a\f\g\v\c\k\q\1\v\8\6\f\m\w\s\s\v\w\h\u\n\e\h\x\9\d\0\j\9\a\y\o\9\e\3\6\b\y\q\8\3\x\w\x\h\8\9\n\3\3\x\2\y\5\t\k\z\8\x\i\h\g\j\p\q\o\z\d\c\w\d\e\g\o\t\0\9\1\7\4\p\n\l\g\4\r\9\9\f\3\o\g\i\5\h\0\n\n\p\5\k\n\4\w\t\0\h\j\r\o\x\2\i\9\r\6\w\z\u\m\p\w\l\w\r\s\g\2\q\0\f\i\i\t\l\d\g\o\h\7\g\4\6\x\s\y\j\l\a\h\d\o\l\b\f\n\1\v\q\h\j\v\w\2\v\d\b\9\k\4\r\i\m\2\1\f\a\q\7\f\b\g\b\4\t\0\b\5\2\w\a\w\0\5\6\g\n\q\9\5\p\h\o\h\a\d\v\d\q\y\0\5\1\p\t\s\k\u\b\q\4\q\6\7\r\a\0\f\r\u\8\g\9\q\t\8\p\y\p\1\p\4\y\a\2\y\h\z\h\a\3\c\t\6\b\o\i\j\1\p\8\9\h\c\o\y\4\k\2\7\8\0\w\0\5\6\4\9\4\w\4\9\h\b\x\6\j\o\a\d\b\u\5\7\n\6\m\v\f\q\5\y\n\7\e\4\m\u\h\s\w\p\k\i\o\m\1\x\s\9\l\u\6\c\9\k\3\5\4\0\l\o\9\n\i\1\s\a\9\p\b\e\m\j\8\a\z\0\k\6\t\f\p\0\p\s\k\k\k\p\0\d\i\b\b\0\2\1\t\s\w\5\c\8\7\z\u\j\v\n\v\7\8\v\n\w\p\8\j\o\8\z\d\n\f\m\l\2\e\6\j\k\8\5\l\x\0\1\e\x\v\a\7\u\o\0\9\w\2\o\u\l\s\3\y\7\8\s\0\c\g\r\5 ]] 00:26:24.459 00:26:24.459 real 0m3.231s 00:26:24.459 user 0m2.622s 00:26:24.459 sys 0m0.421s 00:26:24.459 05:23:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:24.459 05:23:43 -- common/autotest_common.sh@10 -- # set +x 00:26:24.459 ************************************ 00:26:24.459 END TEST dd_rw_offset 00:26:24.459 ************************************ 00:26:24.459 05:23:43 -- dd/basic_rw.sh@1 -- # cleanup 00:26:24.459 05:23:43 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:26:24.459 05:23:43 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:24.459 05:23:43 -- dd/common.sh@11 -- # local nvme_ref= 00:26:24.459 05:23:43 -- dd/common.sh@12 -- # local size=0xffff 00:26:24.459 05:23:43 -- dd/common.sh@14 -- # local bs=1048576 00:26:24.459 05:23:43 -- dd/common.sh@15 -- # local count=1 00:26:24.459 05:23:43 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:24.459 05:23:43 -- dd/common.sh@18 -- # gen_conf 00:26:24.459 05:23:43 -- dd/common.sh@31 -- # xtrace_disable 00:26:24.459 05:23:43 -- common/autotest_common.sh@10 -- # set +x 00:26:24.459 { 00:26:24.459 "subsystems": [ 00:26:24.459 { 00:26:24.459 
"subsystem": "bdev", 00:26:24.459 "config": [ 00:26:24.459 { 00:26:24.459 "params": { 00:26:24.459 "trtype": "pcie", 00:26:24.459 "traddr": "0000:00:06.0", 00:26:24.459 "name": "Nvme0" 00:26:24.459 }, 00:26:24.459 "method": "bdev_nvme_attach_controller" 00:26:24.459 }, 00:26:24.459 { 00:26:24.459 "method": "bdev_wait_for_examine" 00:26:24.459 } 00:26:24.459 ] 00:26:24.459 } 00:26:24.459 ] 00:26:24.459 } 00:26:24.459 [2024-07-26 05:23:43.384322] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:24.460 [2024-07-26 05:23:43.384513] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88602 ] 00:26:24.460 [2024-07-26 05:23:43.536471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.717 [2024-07-26 05:23:43.692601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.914  Copying: 1024/1024 [kB] (average 1000 MBps) 00:26:25.914 00:26:25.914 05:23:44 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:25.914 00:26:25.914 real 0m37.160s 00:26:25.914 user 0m30.180s 00:26:25.914 sys 0m4.828s 00:26:25.914 05:23:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:25.914 05:23:44 -- common/autotest_common.sh@10 -- # set +x 00:26:25.914 ************************************ 00:26:25.914 END TEST spdk_dd_basic_rw 00:26:25.914 ************************************ 00:26:25.914 05:23:44 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:26:25.914 05:23:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:25.914 05:23:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:25.914 05:23:44 -- common/autotest_common.sh@10 -- # set +x 00:26:25.914 ************************************ 00:26:25.914 START TEST spdk_dd_posix 00:26:25.914 ************************************ 00:26:25.914 05:23:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:26:26.174 * Looking for test storage... 
00:26:26.174 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:26.174 05:23:45 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:26.174 05:23:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:26.174 05:23:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:26.174 05:23:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:26.174 05:23:45 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:26.174 05:23:45 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:26.174 05:23:45 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:26.174 05:23:45 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:26.174 05:23:45 -- paths/export.sh@6 -- # export PATH 00:26:26.174 05:23:45 -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:26.174 05:23:45 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:26:26.174 05:23:45 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:26:26.174 05:23:45 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:26:26.174 05:23:45 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:26:26.174 05:23:45 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:26.174 05:23:45 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:26.174 05:23:45 -- dd/posix.sh@130 -- # tests 00:26:26.174 05:23:45 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:26:26.174 * First test run, liburing in use 00:26:26.174 05:23:45 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:26:26.174 05:23:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:26.174 05:23:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:26.174 05:23:45 -- common/autotest_common.sh@10 -- # set +x 00:26:26.174 ************************************ 00:26:26.174 START TEST dd_flag_append 00:26:26.174 ************************************ 00:26:26.174 05:23:45 -- common/autotest_common.sh@1104 -- # append 00:26:26.174 05:23:45 -- dd/posix.sh@16 -- # local dump0 00:26:26.174 05:23:45 -- dd/posix.sh@17 -- # local dump1 00:26:26.174 05:23:45 -- dd/posix.sh@19 -- # gen_bytes 32 00:26:26.174 05:23:45 -- dd/common.sh@98 -- # xtrace_disable 00:26:26.174 05:23:45 -- common/autotest_common.sh@10 -- # set +x 00:26:26.174 05:23:45 -- dd/posix.sh@19 -- # dump0=uyv7evdbfq0kyfncew7pie7x538ak2l5 00:26:26.174 05:23:45 -- dd/posix.sh@20 -- # gen_bytes 32 00:26:26.174 05:23:45 -- dd/common.sh@98 -- # xtrace_disable 00:26:26.174 05:23:45 -- common/autotest_common.sh@10 -- # set +x 00:26:26.174 05:23:45 -- dd/posix.sh@20 -- # dump1=k45t59ibjvl9wzj33obdmiwq1qdaugy3 00:26:26.174 05:23:45 -- dd/posix.sh@22 -- # printf %s uyv7evdbfq0kyfncew7pie7x538ak2l5 00:26:26.174 05:23:45 -- dd/posix.sh@23 -- # printf %s k45t59ibjvl9wzj33obdmiwq1qdaugy3 00:26:26.174 05:23:45 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:26:26.174 [2024-07-26 05:23:45.117372] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:26:26.174 [2024-07-26 05:23:45.117519] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88670 ] 00:26:26.433 [2024-07-26 05:23:45.286100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.433 [2024-07-26 05:23:45.432166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.630  Copying: 32/32 [B] (average 31 kBps) 00:26:27.630 00:26:27.630 05:23:46 -- dd/posix.sh@27 -- # [[ k45t59ibjvl9wzj33obdmiwq1qdaugy3uyv7evdbfq0kyfncew7pie7x538ak2l5 == \k\4\5\t\5\9\i\b\j\v\l\9\w\z\j\3\3\o\b\d\m\i\w\q\1\q\d\a\u\g\y\3\u\y\v\7\e\v\d\b\f\q\0\k\y\f\n\c\e\w\7\p\i\e\7\x\5\3\8\a\k\2\l\5 ]] 00:26:27.630 00:26:27.630 real 0m1.504s 00:26:27.630 user 0m1.212s 00:26:27.630 sys 0m0.180s 00:26:27.630 05:23:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:27.630 05:23:46 -- common/autotest_common.sh@10 -- # set +x 00:26:27.630 ************************************ 00:26:27.630 END TEST dd_flag_append 00:26:27.630 ************************************ 00:26:27.630 05:23:46 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:26:27.630 05:23:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:27.630 05:23:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:27.630 05:23:46 -- common/autotest_common.sh@10 -- # set +x 00:26:27.630 ************************************ 00:26:27.630 START TEST dd_flag_directory 00:26:27.630 ************************************ 00:26:27.630 05:23:46 -- common/autotest_common.sh@1104 -- # directory 00:26:27.630 05:23:46 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:27.630 05:23:46 -- common/autotest_common.sh@640 -- # local es=0 00:26:27.630 05:23:46 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:27.630 05:23:46 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:27.630 05:23:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:27.630 05:23:46 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:27.630 05:23:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:27.630 05:23:46 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:27.630 05:23:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:27.630 05:23:46 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:27.630 05:23:46 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:27.630 05:23:46 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:27.630 [2024-07-26 05:23:46.664415] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:26:27.630 [2024-07-26 05:23:46.664578] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88709 ] 00:26:27.889 [2024-07-26 05:23:46.833124] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.889 [2024-07-26 05:23:46.981888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.148 [2024-07-26 05:23:47.205284] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:28.148 [2024-07-26 05:23:47.205354] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:28.148 [2024-07-26 05:23:47.205374] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:28.717 [2024-07-26 05:23:47.758850] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:29.285 05:23:48 -- common/autotest_common.sh@643 -- # es=236 00:26:29.285 05:23:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:29.285 05:23:48 -- common/autotest_common.sh@652 -- # es=108 00:26:29.285 05:23:48 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:29.285 05:23:48 -- common/autotest_common.sh@660 -- # es=1 00:26:29.285 05:23:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:29.285 05:23:48 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:26:29.285 05:23:48 -- common/autotest_common.sh@640 -- # local es=0 00:26:29.285 05:23:48 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:26:29.285 05:23:48 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:29.285 05:23:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:29.285 05:23:48 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:29.285 05:23:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:29.285 05:23:48 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:29.285 05:23:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:29.285 05:23:48 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:29.285 05:23:48 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:29.285 05:23:48 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:26:29.285 [2024-07-26 05:23:48.147388] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:26:29.285 [2024-07-26 05:23:48.147559] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88725 ] 00:26:29.285 [2024-07-26 05:23:48.302042] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.544 [2024-07-26 05:23:48.450632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.803 [2024-07-26 05:23:48.677298] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:29.803 [2024-07-26 05:23:48.677368] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:29.803 [2024-07-26 05:23:48.677388] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:30.371 [2024-07-26 05:23:49.222162] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:30.631 05:23:49 -- common/autotest_common.sh@643 -- # es=236 00:26:30.631 05:23:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:30.631 05:23:49 -- common/autotest_common.sh@652 -- # es=108 00:26:30.631 05:23:49 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:30.631 05:23:49 -- common/autotest_common.sh@660 -- # es=1 00:26:30.631 05:23:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:30.631 00:26:30.631 real 0m2.957s 00:26:30.631 user 0m2.364s 00:26:30.631 sys 0m0.391s 00:26:30.631 05:23:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:30.631 ************************************ 00:26:30.631 END TEST dd_flag_directory 00:26:30.631 ************************************ 00:26:30.631 05:23:49 -- common/autotest_common.sh@10 -- # set +x 00:26:30.631 05:23:49 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:26:30.631 05:23:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:30.631 05:23:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:30.631 05:23:49 -- common/autotest_common.sh@10 -- # set +x 00:26:30.631 ************************************ 00:26:30.631 START TEST dd_flag_nofollow 00:26:30.631 ************************************ 00:26:30.631 05:23:49 -- common/autotest_common.sh@1104 -- # nofollow 00:26:30.631 05:23:49 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:26:30.631 05:23:49 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:26:30.631 05:23:49 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:26:30.631 05:23:49 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:26:30.631 05:23:49 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:30.631 05:23:49 -- common/autotest_common.sh@640 -- # local es=0 00:26:30.631 05:23:49 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:30.631 05:23:49 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:30.631 05:23:49 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:30.631 05:23:49 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:30.631 05:23:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:30.631 05:23:49 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:30.631 05:23:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:30.631 05:23:49 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:30.631 05:23:49 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:30.631 05:23:49 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:30.631 [2024-07-26 05:23:49.679430] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:30.631 [2024-07-26 05:23:49.679584] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88766 ] 00:26:30.890 [2024-07-26 05:23:49.848753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.890 [2024-07-26 05:23:49.998700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.149 [2024-07-26 05:23:50.218988] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:26:31.149 [2024-07-26 05:23:50.219112] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:26:31.149 [2024-07-26 05:23:50.219136] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:31.717 [2024-07-26 05:23:50.764940] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:32.286 05:23:51 -- common/autotest_common.sh@643 -- # es=216 00:26:32.286 05:23:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:32.286 05:23:51 -- common/autotest_common.sh@652 -- # es=88 00:26:32.286 05:23:51 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:32.286 05:23:51 -- common/autotest_common.sh@660 -- # es=1 00:26:32.286 05:23:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:32.286 05:23:51 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:26:32.286 05:23:51 -- common/autotest_common.sh@640 -- # local es=0 00:26:32.286 05:23:51 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:26:32.286 05:23:51 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:32.286 05:23:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:32.286 05:23:51 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:32.286 05:23:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:32.286 05:23:51 -- common/autotest_common.sh@634 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:32.286 05:23:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:32.286 05:23:51 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:32.286 05:23:51 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:32.286 05:23:51 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:26:32.286 [2024-07-26 05:23:51.173836] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:32.286 [2024-07-26 05:23:51.174020] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88786 ] 00:26:32.286 [2024-07-26 05:23:51.343034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.545 [2024-07-26 05:23:51.490258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.804 [2024-07-26 05:23:51.703560] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:26:32.804 [2024-07-26 05:23:51.703635] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:26:32.804 [2024-07-26 05:23:51.703656] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:33.371 [2024-07-26 05:23:52.260697] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:33.630 05:23:52 -- common/autotest_common.sh@643 -- # es=216 00:26:33.630 05:23:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:33.630 05:23:52 -- common/autotest_common.sh@652 -- # es=88 00:26:33.630 05:23:52 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:33.630 05:23:52 -- common/autotest_common.sh@660 -- # es=1 00:26:33.630 05:23:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:33.630 05:23:52 -- dd/posix.sh@46 -- # gen_bytes 512 00:26:33.630 05:23:52 -- dd/common.sh@98 -- # xtrace_disable 00:26:33.630 05:23:52 -- common/autotest_common.sh@10 -- # set +x 00:26:33.630 05:23:52 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:33.630 [2024-07-26 05:23:52.678563] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:26:33.630 [2024-07-26 05:23:52.678936] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88807 ] 00:26:33.890 [2024-07-26 05:23:52.848704] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.890 [2024-07-26 05:23:52.999554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.086  Copying: 512/512 [B] (average 500 kBps) 00:26:35.086 00:26:35.086 ************************************ 00:26:35.086 END TEST dd_flag_nofollow 00:26:35.086 ************************************ 00:26:35.086 05:23:54 -- dd/posix.sh@49 -- # [[ m22hz1k5xc3n915oxw2x891vd8gs2ev8fhap6tjd4rt8s4txk12hpcn5pqfkjbvy8ijq737rtqlg2uzdoa5hnj73v238s3pde9nazxu2fp8mbhyb19fv523vrl5eqgasd53fnmkd58smqbub2nqlsqyy6qrrtqritsl57du4honr1zlc48e8qp65yzy1yfbvrjj9lpajrk44b87txt89ifai7s5q8hbchzputyxfprdz22illuknr8vyd8yqg5j91nze4ox3ecjo05pkbaw1yxj6kxsg3wssjyfri7ocyuv8uv6o0h845d1rb07yx5v2cmb48vwhfuq6fm4zn9je2br7pxj86ybghoi1l9bq8glenhhu5u5vg3vye68q2dfrr6faplu46pe7mnusa3axd92v7sq9kf9hpu6y1vv17w7fbuf64zazj2juch66mqpzzl8rhb31fpx0f5uama3vv4wn213wy54vd1xw8obs65k5xgaen18ddfcex3krjm0l == \m\2\2\h\z\1\k\5\x\c\3\n\9\1\5\o\x\w\2\x\8\9\1\v\d\8\g\s\2\e\v\8\f\h\a\p\6\t\j\d\4\r\t\8\s\4\t\x\k\1\2\h\p\c\n\5\p\q\f\k\j\b\v\y\8\i\j\q\7\3\7\r\t\q\l\g\2\u\z\d\o\a\5\h\n\j\7\3\v\2\3\8\s\3\p\d\e\9\n\a\z\x\u\2\f\p\8\m\b\h\y\b\1\9\f\v\5\2\3\v\r\l\5\e\q\g\a\s\d\5\3\f\n\m\k\d\5\8\s\m\q\b\u\b\2\n\q\l\s\q\y\y\6\q\r\r\t\q\r\i\t\s\l\5\7\d\u\4\h\o\n\r\1\z\l\c\4\8\e\8\q\p\6\5\y\z\y\1\y\f\b\v\r\j\j\9\l\p\a\j\r\k\4\4\b\8\7\t\x\t\8\9\i\f\a\i\7\s\5\q\8\h\b\c\h\z\p\u\t\y\x\f\p\r\d\z\2\2\i\l\l\u\k\n\r\8\v\y\d\8\y\q\g\5\j\9\1\n\z\e\4\o\x\3\e\c\j\o\0\5\p\k\b\a\w\1\y\x\j\6\k\x\s\g\3\w\s\s\j\y\f\r\i\7\o\c\y\u\v\8\u\v\6\o\0\h\8\4\5\d\1\r\b\0\7\y\x\5\v\2\c\m\b\4\8\v\w\h\f\u\q\6\f\m\4\z\n\9\j\e\2\b\r\7\p\x\j\8\6\y\b\g\h\o\i\1\l\9\b\q\8\g\l\e\n\h\h\u\5\u\5\v\g\3\v\y\e\6\8\q\2\d\f\r\r\6\f\a\p\l\u\4\6\p\e\7\m\n\u\s\a\3\a\x\d\9\2\v\7\s\q\9\k\f\9\h\p\u\6\y\1\v\v\1\7\w\7\f\b\u\f\6\4\z\a\z\j\2\j\u\c\h\6\6\m\q\p\z\z\l\8\r\h\b\3\1\f\p\x\0\f\5\u\a\m\a\3\v\v\4\w\n\2\1\3\w\y\5\4\v\d\1\x\w\8\o\b\s\6\5\k\5\x\g\a\e\n\1\8\d\d\f\c\e\x\3\k\r\j\m\0\l ]] 00:26:35.086 00:26:35.086 real 0m4.515s 00:26:35.086 user 0m3.629s 00:26:35.086 sys 0m0.572s 00:26:35.086 05:23:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:35.086 05:23:54 -- common/autotest_common.sh@10 -- # set +x 00:26:35.086 05:23:54 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:26:35.086 05:23:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:35.086 05:23:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:35.086 05:23:54 -- common/autotest_common.sh@10 -- # set +x 00:26:35.086 ************************************ 00:26:35.086 START TEST dd_flag_noatime 00:26:35.086 ************************************ 00:26:35.086 05:23:54 -- common/autotest_common.sh@1104 -- # noatime 00:26:35.086 05:23:54 -- dd/posix.sh@53 -- # local atime_if 00:26:35.086 05:23:54 -- dd/posix.sh@54 -- # local atime_of 00:26:35.086 05:23:54 -- dd/posix.sh@58 -- # gen_bytes 512 00:26:35.086 05:23:54 -- dd/common.sh@98 -- # xtrace_disable 00:26:35.086 05:23:54 -- common/autotest_common.sh@10 -- # set +x 00:26:35.086 05:23:54 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:35.086 05:23:54 -- dd/posix.sh@60 -- # atime_if=1721971433 00:26:35.086 05:23:54 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:35.086 05:23:54 -- dd/posix.sh@61 -- # atime_of=1721971434 00:26:35.086 05:23:54 -- dd/posix.sh@66 -- # sleep 1 00:26:36.463 05:23:55 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:36.463 [2024-07-26 05:23:55.259429] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:36.463 [2024-07-26 05:23:55.259596] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88854 ] 00:26:36.463 [2024-07-26 05:23:55.429806] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.722 [2024-07-26 05:23:55.579271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.685  Copying: 512/512 [B] (average 500 kBps) 00:26:37.685 00:26:37.685 05:23:56 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:37.685 05:23:56 -- dd/posix.sh@69 -- # (( atime_if == 1721971433 )) 00:26:37.685 05:23:56 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:37.685 05:23:56 -- dd/posix.sh@70 -- # (( atime_of == 1721971434 )) 00:26:37.685 05:23:56 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:37.685 [2024-07-26 05:23:56.784426] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:37.685 [2024-07-26 05:23:56.784795] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88878 ] 00:26:37.944 [2024-07-26 05:23:56.952871] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.203 [2024-07-26 05:23:57.106821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.399  Copying: 512/512 [B] (average 500 kBps) 00:26:39.399 00:26:39.399 05:23:58 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:39.399 ************************************ 00:26:39.399 END TEST dd_flag_noatime 00:26:39.399 ************************************ 00:26:39.399 05:23:58 -- dd/posix.sh@73 -- # (( atime_if < 1721971437 )) 00:26:39.399 00:26:39.399 real 0m4.063s 00:26:39.399 user 0m2.427s 00:26:39.399 sys 0m0.409s 00:26:39.399 05:23:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:39.399 05:23:58 -- common/autotest_common.sh@10 -- # set +x 00:26:39.399 05:23:58 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:26:39.399 05:23:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:39.399 05:23:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:39.399 05:23:58 -- common/autotest_common.sh@10 -- # set +x 00:26:39.399 ************************************ 00:26:39.399 START TEST dd_flags_misc 00:26:39.399 ************************************ 00:26:39.399 05:23:58 -- common/autotest_common.sh@1104 -- # io 00:26:39.399 05:23:58 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:26:39.399 05:23:58 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 
00:26:39.399 05:23:58 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:26:39.399 05:23:58 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:26:39.399 05:23:58 -- dd/posix.sh@86 -- # gen_bytes 512 00:26:39.399 05:23:58 -- dd/common.sh@98 -- # xtrace_disable 00:26:39.399 05:23:58 -- common/autotest_common.sh@10 -- # set +x 00:26:39.399 05:23:58 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:39.399 05:23:58 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:26:39.399 [2024-07-26 05:23:58.347395] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:39.399 [2024-07-26 05:23:58.347712] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88912 ] 00:26:39.399 [2024-07-26 05:23:58.503980] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.658 [2024-07-26 05:23:58.655233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.853  Copying: 512/512 [B] (average 500 kBps) 00:26:40.853 00:26:40.853 05:23:59 -- dd/posix.sh@93 -- # [[ oaud0z6vuiv87nwtbueag5fodvqihyoyx8um0kf5k7kdb6vwmmnr4ptd0pk5n6w05e0ffo0qhycydyaz2vgqrtsprd1hwfpxy9lu6xczovbnxct6h63v0q211i8s95z9uvijj3ynlimwfv87c4rtjr9eqqj0xh3hfeto3j1ic1zgp6lajp0lluz6gn1cg5yjv6ve6mt8yhkqdzwvgkkcvvlb66dt24texnuc7a79rblu11c1rxfxa0zxy4yt39qt4pwllri8q3sb12602pemexwyc42zm37jjrxpv79ynaq9snsxdfxw18w145mqc58jtqqhq20vqs6nzs989z2hkcecnb9f4fhy1hk3du9h9vxkbaxw90pwatahcsmfrrczjiswgu5chapxq7o4g0tqouspbeosyp10p8357y1jrkk2zlb7ea48u6orcpo1qygtqb4skeo6a60vqilypblo1aauemzva5pnk5s96h5yiocp1ztlt8ns7erz3flkdasm == \o\a\u\d\0\z\6\v\u\i\v\8\7\n\w\t\b\u\e\a\g\5\f\o\d\v\q\i\h\y\o\y\x\8\u\m\0\k\f\5\k\7\k\d\b\6\v\w\m\m\n\r\4\p\t\d\0\p\k\5\n\6\w\0\5\e\0\f\f\o\0\q\h\y\c\y\d\y\a\z\2\v\g\q\r\t\s\p\r\d\1\h\w\f\p\x\y\9\l\u\6\x\c\z\o\v\b\n\x\c\t\6\h\6\3\v\0\q\2\1\1\i\8\s\9\5\z\9\u\v\i\j\j\3\y\n\l\i\m\w\f\v\8\7\c\4\r\t\j\r\9\e\q\q\j\0\x\h\3\h\f\e\t\o\3\j\1\i\c\1\z\g\p\6\l\a\j\p\0\l\l\u\z\6\g\n\1\c\g\5\y\j\v\6\v\e\6\m\t\8\y\h\k\q\d\z\w\v\g\k\k\c\v\v\l\b\6\6\d\t\2\4\t\e\x\n\u\c\7\a\7\9\r\b\l\u\1\1\c\1\r\x\f\x\a\0\z\x\y\4\y\t\3\9\q\t\4\p\w\l\l\r\i\8\q\3\s\b\1\2\6\0\2\p\e\m\e\x\w\y\c\4\2\z\m\3\7\j\j\r\x\p\v\7\9\y\n\a\q\9\s\n\s\x\d\f\x\w\1\8\w\1\4\5\m\q\c\5\8\j\t\q\q\h\q\2\0\v\q\s\6\n\z\s\9\8\9\z\2\h\k\c\e\c\n\b\9\f\4\f\h\y\1\h\k\3\d\u\9\h\9\v\x\k\b\a\x\w\9\0\p\w\a\t\a\h\c\s\m\f\r\r\c\z\j\i\s\w\g\u\5\c\h\a\p\x\q\7\o\4\g\0\t\q\o\u\s\p\b\e\o\s\y\p\1\0\p\8\3\5\7\y\1\j\r\k\k\2\z\l\b\7\e\a\4\8\u\6\o\r\c\p\o\1\q\y\g\t\q\b\4\s\k\e\o\6\a\6\0\v\q\i\l\y\p\b\l\o\1\a\a\u\e\m\z\v\a\5\p\n\k\5\s\9\6\h\5\y\i\o\c\p\1\z\t\l\t\8\n\s\7\e\r\z\3\f\l\k\d\a\s\m ]] 00:26:40.853 05:23:59 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:40.853 05:23:59 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:26:40.853 [2024-07-26 05:23:59.843903] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:26:40.853 [2024-07-26 05:23:59.844080] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88936 ] 00:26:41.112 [2024-07-26 05:24:00.013596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.112 [2024-07-26 05:24:00.172734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.306  Copying: 512/512 [B] (average 500 kBps) 00:26:42.306 00:26:42.306 05:24:01 -- dd/posix.sh@93 -- # [[ oaud0z6vuiv87nwtbueag5fodvqihyoyx8um0kf5k7kdb6vwmmnr4ptd0pk5n6w05e0ffo0qhycydyaz2vgqrtsprd1hwfpxy9lu6xczovbnxct6h63v0q211i8s95z9uvijj3ynlimwfv87c4rtjr9eqqj0xh3hfeto3j1ic1zgp6lajp0lluz6gn1cg5yjv6ve6mt8yhkqdzwvgkkcvvlb66dt24texnuc7a79rblu11c1rxfxa0zxy4yt39qt4pwllri8q3sb12602pemexwyc42zm37jjrxpv79ynaq9snsxdfxw18w145mqc58jtqqhq20vqs6nzs989z2hkcecnb9f4fhy1hk3du9h9vxkbaxw90pwatahcsmfrrczjiswgu5chapxq7o4g0tqouspbeosyp10p8357y1jrkk2zlb7ea48u6orcpo1qygtqb4skeo6a60vqilypblo1aauemzva5pnk5s96h5yiocp1ztlt8ns7erz3flkdasm == \o\a\u\d\0\z\6\v\u\i\v\8\7\n\w\t\b\u\e\a\g\5\f\o\d\v\q\i\h\y\o\y\x\8\u\m\0\k\f\5\k\7\k\d\b\6\v\w\m\m\n\r\4\p\t\d\0\p\k\5\n\6\w\0\5\e\0\f\f\o\0\q\h\y\c\y\d\y\a\z\2\v\g\q\r\t\s\p\r\d\1\h\w\f\p\x\y\9\l\u\6\x\c\z\o\v\b\n\x\c\t\6\h\6\3\v\0\q\2\1\1\i\8\s\9\5\z\9\u\v\i\j\j\3\y\n\l\i\m\w\f\v\8\7\c\4\r\t\j\r\9\e\q\q\j\0\x\h\3\h\f\e\t\o\3\j\1\i\c\1\z\g\p\6\l\a\j\p\0\l\l\u\z\6\g\n\1\c\g\5\y\j\v\6\v\e\6\m\t\8\y\h\k\q\d\z\w\v\g\k\k\c\v\v\l\b\6\6\d\t\2\4\t\e\x\n\u\c\7\a\7\9\r\b\l\u\1\1\c\1\r\x\f\x\a\0\z\x\y\4\y\t\3\9\q\t\4\p\w\l\l\r\i\8\q\3\s\b\1\2\6\0\2\p\e\m\e\x\w\y\c\4\2\z\m\3\7\j\j\r\x\p\v\7\9\y\n\a\q\9\s\n\s\x\d\f\x\w\1\8\w\1\4\5\m\q\c\5\8\j\t\q\q\h\q\2\0\v\q\s\6\n\z\s\9\8\9\z\2\h\k\c\e\c\n\b\9\f\4\f\h\y\1\h\k\3\d\u\9\h\9\v\x\k\b\a\x\w\9\0\p\w\a\t\a\h\c\s\m\f\r\r\c\z\j\i\s\w\g\u\5\c\h\a\p\x\q\7\o\4\g\0\t\q\o\u\s\p\b\e\o\s\y\p\1\0\p\8\3\5\7\y\1\j\r\k\k\2\z\l\b\7\e\a\4\8\u\6\o\r\c\p\o\1\q\y\g\t\q\b\4\s\k\e\o\6\a\6\0\v\q\i\l\y\p\b\l\o\1\a\a\u\e\m\z\v\a\5\p\n\k\5\s\9\6\h\5\y\i\o\c\p\1\z\t\l\t\8\n\s\7\e\r\z\3\f\l\k\d\a\s\m ]] 00:26:42.306 05:24:01 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:42.306 05:24:01 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:26:42.306 [2024-07-26 05:24:01.407393] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:26:42.306 [2024-07-26 05:24:01.407552] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88950 ] 00:26:42.564 [2024-07-26 05:24:01.577688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.822 [2024-07-26 05:24:01.733924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.017  Copying: 512/512 [B] (average 100 kBps) 00:26:44.017 00:26:44.017 05:24:02 -- dd/posix.sh@93 -- # [[ oaud0z6vuiv87nwtbueag5fodvqihyoyx8um0kf5k7kdb6vwmmnr4ptd0pk5n6w05e0ffo0qhycydyaz2vgqrtsprd1hwfpxy9lu6xczovbnxct6h63v0q211i8s95z9uvijj3ynlimwfv87c4rtjr9eqqj0xh3hfeto3j1ic1zgp6lajp0lluz6gn1cg5yjv6ve6mt8yhkqdzwvgkkcvvlb66dt24texnuc7a79rblu11c1rxfxa0zxy4yt39qt4pwllri8q3sb12602pemexwyc42zm37jjrxpv79ynaq9snsxdfxw18w145mqc58jtqqhq20vqs6nzs989z2hkcecnb9f4fhy1hk3du9h9vxkbaxw90pwatahcsmfrrczjiswgu5chapxq7o4g0tqouspbeosyp10p8357y1jrkk2zlb7ea48u6orcpo1qygtqb4skeo6a60vqilypblo1aauemzva5pnk5s96h5yiocp1ztlt8ns7erz3flkdasm == \o\a\u\d\0\z\6\v\u\i\v\8\7\n\w\t\b\u\e\a\g\5\f\o\d\v\q\i\h\y\o\y\x\8\u\m\0\k\f\5\k\7\k\d\b\6\v\w\m\m\n\r\4\p\t\d\0\p\k\5\n\6\w\0\5\e\0\f\f\o\0\q\h\y\c\y\d\y\a\z\2\v\g\q\r\t\s\p\r\d\1\h\w\f\p\x\y\9\l\u\6\x\c\z\o\v\b\n\x\c\t\6\h\6\3\v\0\q\2\1\1\i\8\s\9\5\z\9\u\v\i\j\j\3\y\n\l\i\m\w\f\v\8\7\c\4\r\t\j\r\9\e\q\q\j\0\x\h\3\h\f\e\t\o\3\j\1\i\c\1\z\g\p\6\l\a\j\p\0\l\l\u\z\6\g\n\1\c\g\5\y\j\v\6\v\e\6\m\t\8\y\h\k\q\d\z\w\v\g\k\k\c\v\v\l\b\6\6\d\t\2\4\t\e\x\n\u\c\7\a\7\9\r\b\l\u\1\1\c\1\r\x\f\x\a\0\z\x\y\4\y\t\3\9\q\t\4\p\w\l\l\r\i\8\q\3\s\b\1\2\6\0\2\p\e\m\e\x\w\y\c\4\2\z\m\3\7\j\j\r\x\p\v\7\9\y\n\a\q\9\s\n\s\x\d\f\x\w\1\8\w\1\4\5\m\q\c\5\8\j\t\q\q\h\q\2\0\v\q\s\6\n\z\s\9\8\9\z\2\h\k\c\e\c\n\b\9\f\4\f\h\y\1\h\k\3\d\u\9\h\9\v\x\k\b\a\x\w\9\0\p\w\a\t\a\h\c\s\m\f\r\r\c\z\j\i\s\w\g\u\5\c\h\a\p\x\q\7\o\4\g\0\t\q\o\u\s\p\b\e\o\s\y\p\1\0\p\8\3\5\7\y\1\j\r\k\k\2\z\l\b\7\e\a\4\8\u\6\o\r\c\p\o\1\q\y\g\t\q\b\4\s\k\e\o\6\a\6\0\v\q\i\l\y\p\b\l\o\1\a\a\u\e\m\z\v\a\5\p\n\k\5\s\9\6\h\5\y\i\o\c\p\1\z\t\l\t\8\n\s\7\e\r\z\3\f\l\k\d\a\s\m ]] 00:26:44.017 05:24:02 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:44.017 05:24:02 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:26:44.017 [2024-07-26 05:24:02.991554] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:26:44.017 [2024-07-26 05:24:02.991763] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88970 ] 00:26:44.276 [2024-07-26 05:24:03.161773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.276 [2024-07-26 05:24:03.323763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.473  Copying: 512/512 [B] (average 83 kBps) 00:26:45.473 00:26:45.473 05:24:04 -- dd/posix.sh@93 -- # [[ oaud0z6vuiv87nwtbueag5fodvqihyoyx8um0kf5k7kdb6vwmmnr4ptd0pk5n6w05e0ffo0qhycydyaz2vgqrtsprd1hwfpxy9lu6xczovbnxct6h63v0q211i8s95z9uvijj3ynlimwfv87c4rtjr9eqqj0xh3hfeto3j1ic1zgp6lajp0lluz6gn1cg5yjv6ve6mt8yhkqdzwvgkkcvvlb66dt24texnuc7a79rblu11c1rxfxa0zxy4yt39qt4pwllri8q3sb12602pemexwyc42zm37jjrxpv79ynaq9snsxdfxw18w145mqc58jtqqhq20vqs6nzs989z2hkcecnb9f4fhy1hk3du9h9vxkbaxw90pwatahcsmfrrczjiswgu5chapxq7o4g0tqouspbeosyp10p8357y1jrkk2zlb7ea48u6orcpo1qygtqb4skeo6a60vqilypblo1aauemzva5pnk5s96h5yiocp1ztlt8ns7erz3flkdasm == \o\a\u\d\0\z\6\v\u\i\v\8\7\n\w\t\b\u\e\a\g\5\f\o\d\v\q\i\h\y\o\y\x\8\u\m\0\k\f\5\k\7\k\d\b\6\v\w\m\m\n\r\4\p\t\d\0\p\k\5\n\6\w\0\5\e\0\f\f\o\0\q\h\y\c\y\d\y\a\z\2\v\g\q\r\t\s\p\r\d\1\h\w\f\p\x\y\9\l\u\6\x\c\z\o\v\b\n\x\c\t\6\h\6\3\v\0\q\2\1\1\i\8\s\9\5\z\9\u\v\i\j\j\3\y\n\l\i\m\w\f\v\8\7\c\4\r\t\j\r\9\e\q\q\j\0\x\h\3\h\f\e\t\o\3\j\1\i\c\1\z\g\p\6\l\a\j\p\0\l\l\u\z\6\g\n\1\c\g\5\y\j\v\6\v\e\6\m\t\8\y\h\k\q\d\z\w\v\g\k\k\c\v\v\l\b\6\6\d\t\2\4\t\e\x\n\u\c\7\a\7\9\r\b\l\u\1\1\c\1\r\x\f\x\a\0\z\x\y\4\y\t\3\9\q\t\4\p\w\l\l\r\i\8\q\3\s\b\1\2\6\0\2\p\e\m\e\x\w\y\c\4\2\z\m\3\7\j\j\r\x\p\v\7\9\y\n\a\q\9\s\n\s\x\d\f\x\w\1\8\w\1\4\5\m\q\c\5\8\j\t\q\q\h\q\2\0\v\q\s\6\n\z\s\9\8\9\z\2\h\k\c\e\c\n\b\9\f\4\f\h\y\1\h\k\3\d\u\9\h\9\v\x\k\b\a\x\w\9\0\p\w\a\t\a\h\c\s\m\f\r\r\c\z\j\i\s\w\g\u\5\c\h\a\p\x\q\7\o\4\g\0\t\q\o\u\s\p\b\e\o\s\y\p\1\0\p\8\3\5\7\y\1\j\r\k\k\2\z\l\b\7\e\a\4\8\u\6\o\r\c\p\o\1\q\y\g\t\q\b\4\s\k\e\o\6\a\6\0\v\q\i\l\y\p\b\l\o\1\a\a\u\e\m\z\v\a\5\p\n\k\5\s\9\6\h\5\y\i\o\c\p\1\z\t\l\t\8\n\s\7\e\r\z\3\f\l\k\d\a\s\m ]] 00:26:45.473 05:24:04 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:26:45.473 05:24:04 -- dd/posix.sh@86 -- # gen_bytes 512 00:26:45.473 05:24:04 -- dd/common.sh@98 -- # xtrace_disable 00:26:45.473 05:24:04 -- common/autotest_common.sh@10 -- # set +x 00:26:45.473 05:24:04 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:45.473 05:24:04 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:26:45.473 [2024-07-26 05:24:04.549702] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:26:45.473 [2024-07-26 05:24:04.549849] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88990 ] 00:26:45.731 [2024-07-26 05:24:04.719838] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.990 [2024-07-26 05:24:04.872259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.927  Copying: 512/512 [B] (average 500 kBps) 00:26:46.927 00:26:46.928 05:24:06 -- dd/posix.sh@93 -- # [[ 0a622nkl1svb9kibpmleoxuqaimq1hr1hl8xfj5msbs8i7pewdu6w2xkk8ux2r4ys493go8xwmgljtaona5ikyg7k2tdyta1yhnzoy0sr1lti2pe1ri7k0iux6tyn9k0be0x5qny26di29r237hcindivjhq3gnx41ie7bleptwwtf0eq711z7kynni48oaxp3f7rury1hg2gpzbnxsdhrv4p6cg781dx3g7x8zr4ujnc7dcqj3potpzko2i46hzr7u6fmj3ik6tsia2xffmwrlq6s9s7rfg9m37yowxnkibpzadeq9kpn9yljf9xeodhwumelsuufnhfpvbglbsol7qlud96zgsqw6lwhoagp0vukkacekl6p2f0cgmeg9dgf286dkrr23i4lf4vsq0d0xkgz39lf0dqxcbhn5n2mujjpsxgfl6gh20dn1aln4qlj5acrkx5ppf08byb67bf3ryscw2qa4a7rxm2qi4f0isyeh7x836ognh46sjnao9 == \0\a\6\2\2\n\k\l\1\s\v\b\9\k\i\b\p\m\l\e\o\x\u\q\a\i\m\q\1\h\r\1\h\l\8\x\f\j\5\m\s\b\s\8\i\7\p\e\w\d\u\6\w\2\x\k\k\8\u\x\2\r\4\y\s\4\9\3\g\o\8\x\w\m\g\l\j\t\a\o\n\a\5\i\k\y\g\7\k\2\t\d\y\t\a\1\y\h\n\z\o\y\0\s\r\1\l\t\i\2\p\e\1\r\i\7\k\0\i\u\x\6\t\y\n\9\k\0\b\e\0\x\5\q\n\y\2\6\d\i\2\9\r\2\3\7\h\c\i\n\d\i\v\j\h\q\3\g\n\x\4\1\i\e\7\b\l\e\p\t\w\w\t\f\0\e\q\7\1\1\z\7\k\y\n\n\i\4\8\o\a\x\p\3\f\7\r\u\r\y\1\h\g\2\g\p\z\b\n\x\s\d\h\r\v\4\p\6\c\g\7\8\1\d\x\3\g\7\x\8\z\r\4\u\j\n\c\7\d\c\q\j\3\p\o\t\p\z\k\o\2\i\4\6\h\z\r\7\u\6\f\m\j\3\i\k\6\t\s\i\a\2\x\f\f\m\w\r\l\q\6\s\9\s\7\r\f\g\9\m\3\7\y\o\w\x\n\k\i\b\p\z\a\d\e\q\9\k\p\n\9\y\l\j\f\9\x\e\o\d\h\w\u\m\e\l\s\u\u\f\n\h\f\p\v\b\g\l\b\s\o\l\7\q\l\u\d\9\6\z\g\s\q\w\6\l\w\h\o\a\g\p\0\v\u\k\k\a\c\e\k\l\6\p\2\f\0\c\g\m\e\g\9\d\g\f\2\8\6\d\k\r\r\2\3\i\4\l\f\4\v\s\q\0\d\0\x\k\g\z\3\9\l\f\0\d\q\x\c\b\h\n\5\n\2\m\u\j\j\p\s\x\g\f\l\6\g\h\2\0\d\n\1\a\l\n\4\q\l\j\5\a\c\r\k\x\5\p\p\f\0\8\b\y\b\6\7\b\f\3\r\y\s\c\w\2\q\a\4\a\7\r\x\m\2\q\i\4\f\0\i\s\y\e\h\7\x\8\3\6\o\g\n\h\4\6\s\j\n\a\o\9 ]] 00:26:46.928 05:24:06 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:46.928 05:24:06 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:26:47.187 [2024-07-26 05:24:06.059546] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:26:47.187 [2024-07-26 05:24:06.059716] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89009 ] 00:26:47.187 [2024-07-26 05:24:06.228403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.446 [2024-07-26 05:24:06.377647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.642  Copying: 512/512 [B] (average 500 kBps) 00:26:48.642 00:26:48.642 05:24:07 -- dd/posix.sh@93 -- # [[ 0a622nkl1svb9kibpmleoxuqaimq1hr1hl8xfj5msbs8i7pewdu6w2xkk8ux2r4ys493go8xwmgljtaona5ikyg7k2tdyta1yhnzoy0sr1lti2pe1ri7k0iux6tyn9k0be0x5qny26di29r237hcindivjhq3gnx41ie7bleptwwtf0eq711z7kynni48oaxp3f7rury1hg2gpzbnxsdhrv4p6cg781dx3g7x8zr4ujnc7dcqj3potpzko2i46hzr7u6fmj3ik6tsia2xffmwrlq6s9s7rfg9m37yowxnkibpzadeq9kpn9yljf9xeodhwumelsuufnhfpvbglbsol7qlud96zgsqw6lwhoagp0vukkacekl6p2f0cgmeg9dgf286dkrr23i4lf4vsq0d0xkgz39lf0dqxcbhn5n2mujjpsxgfl6gh20dn1aln4qlj5acrkx5ppf08byb67bf3ryscw2qa4a7rxm2qi4f0isyeh7x836ognh46sjnao9 == \0\a\6\2\2\n\k\l\1\s\v\b\9\k\i\b\p\m\l\e\o\x\u\q\a\i\m\q\1\h\r\1\h\l\8\x\f\j\5\m\s\b\s\8\i\7\p\e\w\d\u\6\w\2\x\k\k\8\u\x\2\r\4\y\s\4\9\3\g\o\8\x\w\m\g\l\j\t\a\o\n\a\5\i\k\y\g\7\k\2\t\d\y\t\a\1\y\h\n\z\o\y\0\s\r\1\l\t\i\2\p\e\1\r\i\7\k\0\i\u\x\6\t\y\n\9\k\0\b\e\0\x\5\q\n\y\2\6\d\i\2\9\r\2\3\7\h\c\i\n\d\i\v\j\h\q\3\g\n\x\4\1\i\e\7\b\l\e\p\t\w\w\t\f\0\e\q\7\1\1\z\7\k\y\n\n\i\4\8\o\a\x\p\3\f\7\r\u\r\y\1\h\g\2\g\p\z\b\n\x\s\d\h\r\v\4\p\6\c\g\7\8\1\d\x\3\g\7\x\8\z\r\4\u\j\n\c\7\d\c\q\j\3\p\o\t\p\z\k\o\2\i\4\6\h\z\r\7\u\6\f\m\j\3\i\k\6\t\s\i\a\2\x\f\f\m\w\r\l\q\6\s\9\s\7\r\f\g\9\m\3\7\y\o\w\x\n\k\i\b\p\z\a\d\e\q\9\k\p\n\9\y\l\j\f\9\x\e\o\d\h\w\u\m\e\l\s\u\u\f\n\h\f\p\v\b\g\l\b\s\o\l\7\q\l\u\d\9\6\z\g\s\q\w\6\l\w\h\o\a\g\p\0\v\u\k\k\a\c\e\k\l\6\p\2\f\0\c\g\m\e\g\9\d\g\f\2\8\6\d\k\r\r\2\3\i\4\l\f\4\v\s\q\0\d\0\x\k\g\z\3\9\l\f\0\d\q\x\c\b\h\n\5\n\2\m\u\j\j\p\s\x\g\f\l\6\g\h\2\0\d\n\1\a\l\n\4\q\l\j\5\a\c\r\k\x\5\p\p\f\0\8\b\y\b\6\7\b\f\3\r\y\s\c\w\2\q\a\4\a\7\r\x\m\2\q\i\4\f\0\i\s\y\e\h\7\x\8\3\6\o\g\n\h\4\6\s\j\n\a\o\9 ]] 00:26:48.642 05:24:07 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:48.642 05:24:07 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:26:48.642 [2024-07-26 05:24:07.560111] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:26:48.642 [2024-07-26 05:24:07.560263] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89023 ] 00:26:48.642 [2024-07-26 05:24:07.728814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.902 [2024-07-26 05:24:07.885662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.098  Copying: 512/512 [B] (average 166 kBps) 00:26:50.098 00:26:50.098 05:24:09 -- dd/posix.sh@93 -- # [[ 0a622nkl1svb9kibpmleoxuqaimq1hr1hl8xfj5msbs8i7pewdu6w2xkk8ux2r4ys493go8xwmgljtaona5ikyg7k2tdyta1yhnzoy0sr1lti2pe1ri7k0iux6tyn9k0be0x5qny26di29r237hcindivjhq3gnx41ie7bleptwwtf0eq711z7kynni48oaxp3f7rury1hg2gpzbnxsdhrv4p6cg781dx3g7x8zr4ujnc7dcqj3potpzko2i46hzr7u6fmj3ik6tsia2xffmwrlq6s9s7rfg9m37yowxnkibpzadeq9kpn9yljf9xeodhwumelsuufnhfpvbglbsol7qlud96zgsqw6lwhoagp0vukkacekl6p2f0cgmeg9dgf286dkrr23i4lf4vsq0d0xkgz39lf0dqxcbhn5n2mujjpsxgfl6gh20dn1aln4qlj5acrkx5ppf08byb67bf3ryscw2qa4a7rxm2qi4f0isyeh7x836ognh46sjnao9 == \0\a\6\2\2\n\k\l\1\s\v\b\9\k\i\b\p\m\l\e\o\x\u\q\a\i\m\q\1\h\r\1\h\l\8\x\f\j\5\m\s\b\s\8\i\7\p\e\w\d\u\6\w\2\x\k\k\8\u\x\2\r\4\y\s\4\9\3\g\o\8\x\w\m\g\l\j\t\a\o\n\a\5\i\k\y\g\7\k\2\t\d\y\t\a\1\y\h\n\z\o\y\0\s\r\1\l\t\i\2\p\e\1\r\i\7\k\0\i\u\x\6\t\y\n\9\k\0\b\e\0\x\5\q\n\y\2\6\d\i\2\9\r\2\3\7\h\c\i\n\d\i\v\j\h\q\3\g\n\x\4\1\i\e\7\b\l\e\p\t\w\w\t\f\0\e\q\7\1\1\z\7\k\y\n\n\i\4\8\o\a\x\p\3\f\7\r\u\r\y\1\h\g\2\g\p\z\b\n\x\s\d\h\r\v\4\p\6\c\g\7\8\1\d\x\3\g\7\x\8\z\r\4\u\j\n\c\7\d\c\q\j\3\p\o\t\p\z\k\o\2\i\4\6\h\z\r\7\u\6\f\m\j\3\i\k\6\t\s\i\a\2\x\f\f\m\w\r\l\q\6\s\9\s\7\r\f\g\9\m\3\7\y\o\w\x\n\k\i\b\p\z\a\d\e\q\9\k\p\n\9\y\l\j\f\9\x\e\o\d\h\w\u\m\e\l\s\u\u\f\n\h\f\p\v\b\g\l\b\s\o\l\7\q\l\u\d\9\6\z\g\s\q\w\6\l\w\h\o\a\g\p\0\v\u\k\k\a\c\e\k\l\6\p\2\f\0\c\g\m\e\g\9\d\g\f\2\8\6\d\k\r\r\2\3\i\4\l\f\4\v\s\q\0\d\0\x\k\g\z\3\9\l\f\0\d\q\x\c\b\h\n\5\n\2\m\u\j\j\p\s\x\g\f\l\6\g\h\2\0\d\n\1\a\l\n\4\q\l\j\5\a\c\r\k\x\5\p\p\f\0\8\b\y\b\6\7\b\f\3\r\y\s\c\w\2\q\a\4\a\7\r\x\m\2\q\i\4\f\0\i\s\y\e\h\7\x\8\3\6\o\g\n\h\4\6\s\j\n\a\o\9 ]] 00:26:50.098 05:24:09 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:50.098 05:24:09 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:26:50.098 [2024-07-26 05:24:09.078547] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:26:50.098 [2024-07-26 05:24:09.078705] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89043 ] 00:26:50.357 [2024-07-26 05:24:09.248482] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.357 [2024-07-26 05:24:09.399323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.554  Copying: 512/512 [B] (average 125 kBps) 00:26:51.554 00:26:51.554 ************************************ 00:26:51.554 END TEST dd_flags_misc 00:26:51.554 ************************************ 00:26:51.554 05:24:10 -- dd/posix.sh@93 -- # [[ 0a622nkl1svb9kibpmleoxuqaimq1hr1hl8xfj5msbs8i7pewdu6w2xkk8ux2r4ys493go8xwmgljtaona5ikyg7k2tdyta1yhnzoy0sr1lti2pe1ri7k0iux6tyn9k0be0x5qny26di29r237hcindivjhq3gnx41ie7bleptwwtf0eq711z7kynni48oaxp3f7rury1hg2gpzbnxsdhrv4p6cg781dx3g7x8zr4ujnc7dcqj3potpzko2i46hzr7u6fmj3ik6tsia2xffmwrlq6s9s7rfg9m37yowxnkibpzadeq9kpn9yljf9xeodhwumelsuufnhfpvbglbsol7qlud96zgsqw6lwhoagp0vukkacekl6p2f0cgmeg9dgf286dkrr23i4lf4vsq0d0xkgz39lf0dqxcbhn5n2mujjpsxgfl6gh20dn1aln4qlj5acrkx5ppf08byb67bf3ryscw2qa4a7rxm2qi4f0isyeh7x836ognh46sjnao9 == \0\a\6\2\2\n\k\l\1\s\v\b\9\k\i\b\p\m\l\e\o\x\u\q\a\i\m\q\1\h\r\1\h\l\8\x\f\j\5\m\s\b\s\8\i\7\p\e\w\d\u\6\w\2\x\k\k\8\u\x\2\r\4\y\s\4\9\3\g\o\8\x\w\m\g\l\j\t\a\o\n\a\5\i\k\y\g\7\k\2\t\d\y\t\a\1\y\h\n\z\o\y\0\s\r\1\l\t\i\2\p\e\1\r\i\7\k\0\i\u\x\6\t\y\n\9\k\0\b\e\0\x\5\q\n\y\2\6\d\i\2\9\r\2\3\7\h\c\i\n\d\i\v\j\h\q\3\g\n\x\4\1\i\e\7\b\l\e\p\t\w\w\t\f\0\e\q\7\1\1\z\7\k\y\n\n\i\4\8\o\a\x\p\3\f\7\r\u\r\y\1\h\g\2\g\p\z\b\n\x\s\d\h\r\v\4\p\6\c\g\7\8\1\d\x\3\g\7\x\8\z\r\4\u\j\n\c\7\d\c\q\j\3\p\o\t\p\z\k\o\2\i\4\6\h\z\r\7\u\6\f\m\j\3\i\k\6\t\s\i\a\2\x\f\f\m\w\r\l\q\6\s\9\s\7\r\f\g\9\m\3\7\y\o\w\x\n\k\i\b\p\z\a\d\e\q\9\k\p\n\9\y\l\j\f\9\x\e\o\d\h\w\u\m\e\l\s\u\u\f\n\h\f\p\v\b\g\l\b\s\o\l\7\q\l\u\d\9\6\z\g\s\q\w\6\l\w\h\o\a\g\p\0\v\u\k\k\a\c\e\k\l\6\p\2\f\0\c\g\m\e\g\9\d\g\f\2\8\6\d\k\r\r\2\3\i\4\l\f\4\v\s\q\0\d\0\x\k\g\z\3\9\l\f\0\d\q\x\c\b\h\n\5\n\2\m\u\j\j\p\s\x\g\f\l\6\g\h\2\0\d\n\1\a\l\n\4\q\l\j\5\a\c\r\k\x\5\p\p\f\0\8\b\y\b\6\7\b\f\3\r\y\s\c\w\2\q\a\4\a\7\r\x\m\2\q\i\4\f\0\i\s\y\e\h\7\x\8\3\6\o\g\n\h\4\6\s\j\n\a\o\9 ]] 00:26:51.554 00:26:51.554 real 0m12.253s 00:26:51.554 user 0m9.811s 00:26:51.554 sys 0m1.501s 00:26:51.554 05:24:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:51.554 05:24:10 -- common/autotest_common.sh@10 -- # set +x 00:26:51.554 05:24:10 -- dd/posix.sh@131 -- # tests_forced_aio 00:26:51.554 05:24:10 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:26:51.554 * Second test run, disabling liburing, forcing AIO 00:26:51.554 05:24:10 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:26:51.554 05:24:10 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:26:51.554 05:24:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:51.554 05:24:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:51.554 05:24:10 -- common/autotest_common.sh@10 -- # set +x 00:26:51.554 ************************************ 00:26:51.554 START TEST dd_flag_append_forced_aio 00:26:51.554 ************************************ 00:26:51.554 05:24:10 -- common/autotest_common.sh@1104 -- # append 00:26:51.554 05:24:10 -- dd/posix.sh@16 -- # local dump0 00:26:51.554 05:24:10 -- dd/posix.sh@17 -- # local dump1 00:26:51.554 05:24:10 -- dd/posix.sh@19 -- # gen_bytes 32 00:26:51.554 05:24:10 -- 
dd/common.sh@98 -- # xtrace_disable 00:26:51.554 05:24:10 -- common/autotest_common.sh@10 -- # set +x 00:26:51.554 05:24:10 -- dd/posix.sh@19 -- # dump0=um5hnb2ointor48hdcwi8gg6c9myt3t3 00:26:51.554 05:24:10 -- dd/posix.sh@20 -- # gen_bytes 32 00:26:51.554 05:24:10 -- dd/common.sh@98 -- # xtrace_disable 00:26:51.554 05:24:10 -- common/autotest_common.sh@10 -- # set +x 00:26:51.554 05:24:10 -- dd/posix.sh@20 -- # dump1=1b2tnldrfauskfj7yo54vlnbjuy32ayi 00:26:51.554 05:24:10 -- dd/posix.sh@22 -- # printf %s um5hnb2ointor48hdcwi8gg6c9myt3t3 00:26:51.554 05:24:10 -- dd/posix.sh@23 -- # printf %s 1b2tnldrfauskfj7yo54vlnbjuy32ayi 00:26:51.554 05:24:10 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:26:51.813 [2024-07-26 05:24:10.672323] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:51.813 [2024-07-26 05:24:10.672473] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89082 ] 00:26:51.813 [2024-07-26 05:24:10.841083] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.072 [2024-07-26 05:24:10.993434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.304  Copying: 32/32 [B] (average 31 kBps) 00:26:53.304 00:26:53.304 ************************************ 00:26:53.304 END TEST dd_flag_append_forced_aio 00:26:53.304 ************************************ 00:26:53.304 05:24:12 -- dd/posix.sh@27 -- # [[ 1b2tnldrfauskfj7yo54vlnbjuy32ayium5hnb2ointor48hdcwi8gg6c9myt3t3 == \1\b\2\t\n\l\d\r\f\a\u\s\k\f\j\7\y\o\5\4\v\l\n\b\j\u\y\3\2\a\y\i\u\m\5\h\n\b\2\o\i\n\t\o\r\4\8\h\d\c\w\i\8\g\g\6\c\9\m\y\t\3\t\3 ]] 00:26:53.304 00:26:53.304 real 0m1.528s 00:26:53.304 user 0m1.217s 00:26:53.304 sys 0m0.199s 00:26:53.304 05:24:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:53.304 05:24:12 -- common/autotest_common.sh@10 -- # set +x 00:26:53.304 05:24:12 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:26:53.304 05:24:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:53.304 05:24:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:53.304 05:24:12 -- common/autotest_common.sh@10 -- # set +x 00:26:53.304 ************************************ 00:26:53.304 START TEST dd_flag_directory_forced_aio 00:26:53.304 ************************************ 00:26:53.304 05:24:12 -- common/autotest_common.sh@1104 -- # directory 00:26:53.304 05:24:12 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:53.304 05:24:12 -- common/autotest_common.sh@640 -- # local es=0 00:26:53.304 05:24:12 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:53.304 05:24:12 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:53.304 05:24:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:53.304 05:24:12 -- common/autotest_common.sh@632 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:53.304 05:24:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:53.304 05:24:12 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:53.304 05:24:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:53.304 05:24:12 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:53.304 05:24:12 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:53.304 05:24:12 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:53.304 [2024-07-26 05:24:12.245569] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:53.304 [2024-07-26 05:24:12.245735] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89120 ] 00:26:53.564 [2024-07-26 05:24:12.417477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.564 [2024-07-26 05:24:12.568394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.823 [2024-07-26 05:24:12.787479] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:53.823 [2024-07-26 05:24:12.787556] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:53.823 [2024-07-26 05:24:12.787576] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:54.393 [2024-07-26 05:24:13.341255] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:54.651 05:24:13 -- common/autotest_common.sh@643 -- # es=236 00:26:54.651 05:24:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:54.651 05:24:13 -- common/autotest_common.sh@652 -- # es=108 00:26:54.651 05:24:13 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:54.651 05:24:13 -- common/autotest_common.sh@660 -- # es=1 00:26:54.651 05:24:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:54.651 05:24:13 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:26:54.651 05:24:13 -- common/autotest_common.sh@640 -- # local es=0 00:26:54.651 05:24:13 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:26:54.651 05:24:13 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:54.651 05:24:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:54.651 05:24:13 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:54.651 05:24:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:54.651 05:24:13 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:54.651 05:24:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:54.651 05:24:13 -- 
common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:54.651 05:24:13 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:54.651 05:24:13 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:26:54.651 [2024-07-26 05:24:13.743556] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:54.651 [2024-07-26 05:24:13.743710] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89142 ] 00:26:54.909 [2024-07-26 05:24:13.912626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.167 [2024-07-26 05:24:14.064068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.167 [2024-07-26 05:24:14.276244] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:55.167 [2024-07-26 05:24:14.276337] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:55.167 [2024-07-26 05:24:14.276358] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:55.734 [2024-07-26 05:24:14.838756] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:56.302 05:24:15 -- common/autotest_common.sh@643 -- # es=236 00:26:56.302 05:24:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:56.302 05:24:15 -- common/autotest_common.sh@652 -- # es=108 00:26:56.302 05:24:15 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:56.302 05:24:15 -- common/autotest_common.sh@660 -- # es=1 00:26:56.302 05:24:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:56.302 00:26:56.302 real 0m2.995s 00:26:56.302 user 0m2.391s 00:26:56.302 sys 0m0.402s 00:26:56.302 05:24:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:56.302 05:24:15 -- common/autotest_common.sh@10 -- # set +x 00:26:56.302 ************************************ 00:26:56.302 END TEST dd_flag_directory_forced_aio 00:26:56.302 ************************************ 00:26:56.302 05:24:15 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:26:56.302 05:24:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:56.302 05:24:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:56.302 05:24:15 -- common/autotest_common.sh@10 -- # set +x 00:26:56.302 ************************************ 00:26:56.302 START TEST dd_flag_nofollow_forced_aio 00:26:56.302 ************************************ 00:26:56.302 05:24:15 -- common/autotest_common.sh@1104 -- # nofollow 00:26:56.302 05:24:15 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:26:56.302 05:24:15 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:26:56.302 05:24:15 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:26:56.302 05:24:15 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:26:56.302 05:24:15 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:56.302 05:24:15 -- common/autotest_common.sh@640 -- # local es=0 00:26:56.302 05:24:15 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:56.302 05:24:15 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:56.303 05:24:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:56.303 05:24:15 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:56.303 05:24:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:56.303 05:24:15 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:56.303 05:24:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:56.303 05:24:15 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:56.303 05:24:15 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:56.303 05:24:15 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:56.303 [2024-07-26 05:24:15.297745] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:56.303 [2024-07-26 05:24:15.297900] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89177 ] 00:26:56.562 [2024-07-26 05:24:15.465064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.562 [2024-07-26 05:24:15.613821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.821 [2024-07-26 05:24:15.844448] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:26:56.821 [2024-07-26 05:24:15.844521] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:26:56.821 [2024-07-26 05:24:15.844542] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:57.389 [2024-07-26 05:24:16.386708] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:57.648 05:24:16 -- common/autotest_common.sh@643 -- # es=216 00:26:57.648 05:24:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:57.648 05:24:16 -- common/autotest_common.sh@652 -- # es=88 00:26:57.648 05:24:16 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:57.648 05:24:16 -- common/autotest_common.sh@660 -- # es=1 00:26:57.648 05:24:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:57.648 05:24:16 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:26:57.648 05:24:16 -- common/autotest_common.sh@640 -- # local es=0 00:26:57.648 05:24:16 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:26:57.648 05:24:16 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:57.649 05:24:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:57.649 05:24:16 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:57.649 05:24:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:57.649 05:24:16 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:57.649 05:24:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:57.649 05:24:16 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:57.649 05:24:16 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:57.649 05:24:16 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:26:57.907 [2024-07-26 05:24:16.789507] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:57.908 [2024-07-26 05:24:16.790186] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89198 ] 00:26:57.908 [2024-07-26 05:24:16.960693] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.167 [2024-07-26 05:24:17.114696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.426 [2024-07-26 05:24:17.330288] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:26:58.426 [2024-07-26 05:24:17.330365] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:26:58.426 [2024-07-26 05:24:17.330387] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:58.994 [2024-07-26 05:24:17.878409] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:59.254 05:24:18 -- common/autotest_common.sh@643 -- # es=216 00:26:59.254 05:24:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:59.254 05:24:18 -- common/autotest_common.sh@652 -- # es=88 00:26:59.254 05:24:18 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:59.254 05:24:18 -- common/autotest_common.sh@660 -- # es=1 00:26:59.254 05:24:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:59.254 05:24:18 -- dd/posix.sh@46 -- # gen_bytes 512 00:26:59.254 05:24:18 -- dd/common.sh@98 -- # xtrace_disable 00:26:59.254 05:24:18 -- common/autotest_common.sh@10 -- # set +x 00:26:59.254 05:24:18 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:59.254 [2024-07-26 05:24:18.299720] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:26:59.254 [2024-07-26 05:24:18.299884] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89218 ] 00:26:59.513 [2024-07-26 05:24:18.468319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.513 [2024-07-26 05:24:18.616142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.709  Copying: 512/512 [B] (average 500 kBps) 00:27:00.709 00:27:00.709 05:24:19 -- dd/posix.sh@49 -- # [[ 4b50fexkwerycrgzjf7g2q0l8kjuh0hviybov7txet1h62710j8wlkz8a0norb8aknih64fig0l3eb42bd8tvbvhixr4cnsbf9368guidpps7fppa0fpxex0ao7oas8ah1r3jm9nbbk67i3qtpv2t8s240ha9lc2w2a50ix3i3fnjuae6pathonsn3lfshoqzazwwueb8cgz50iekdn25ngplhu6uxikl7nm78g9jusx6i2h771d1e854rfhu85226dtopy0o6gjx0zhm3jh1lgfx3xpph21movelivx8ofa6rd2tuief5j9b68cyyscm9r15gwzhbvqakkm5jjdqg4m09spqr1upi9j6tkhu0klnc73dah2igid55mzr5du0lhnagcd1lug155i6qcmpj7vk4sm5r2u00fqjwb0hhxc8k6zil7a9jj9aoyc2ifhx3dj4ff4y4jiobclgfju7d04ghc3udgrputx76jrieymunlyhpzpy7k2dh91ilp4 == \4\b\5\0\f\e\x\k\w\e\r\y\c\r\g\z\j\f\7\g\2\q\0\l\8\k\j\u\h\0\h\v\i\y\b\o\v\7\t\x\e\t\1\h\6\2\7\1\0\j\8\w\l\k\z\8\a\0\n\o\r\b\8\a\k\n\i\h\6\4\f\i\g\0\l\3\e\b\4\2\b\d\8\t\v\b\v\h\i\x\r\4\c\n\s\b\f\9\3\6\8\g\u\i\d\p\p\s\7\f\p\p\a\0\f\p\x\e\x\0\a\o\7\o\a\s\8\a\h\1\r\3\j\m\9\n\b\b\k\6\7\i\3\q\t\p\v\2\t\8\s\2\4\0\h\a\9\l\c\2\w\2\a\5\0\i\x\3\i\3\f\n\j\u\a\e\6\p\a\t\h\o\n\s\n\3\l\f\s\h\o\q\z\a\z\w\w\u\e\b\8\c\g\z\5\0\i\e\k\d\n\2\5\n\g\p\l\h\u\6\u\x\i\k\l\7\n\m\7\8\g\9\j\u\s\x\6\i\2\h\7\7\1\d\1\e\8\5\4\r\f\h\u\8\5\2\2\6\d\t\o\p\y\0\o\6\g\j\x\0\z\h\m\3\j\h\1\l\g\f\x\3\x\p\p\h\2\1\m\o\v\e\l\i\v\x\8\o\f\a\6\r\d\2\t\u\i\e\f\5\j\9\b\6\8\c\y\y\s\c\m\9\r\1\5\g\w\z\h\b\v\q\a\k\k\m\5\j\j\d\q\g\4\m\0\9\s\p\q\r\1\u\p\i\9\j\6\t\k\h\u\0\k\l\n\c\7\3\d\a\h\2\i\g\i\d\5\5\m\z\r\5\d\u\0\l\h\n\a\g\c\d\1\l\u\g\1\5\5\i\6\q\c\m\p\j\7\v\k\4\s\m\5\r\2\u\0\0\f\q\j\w\b\0\h\h\x\c\8\k\6\z\i\l\7\a\9\j\j\9\a\o\y\c\2\i\f\h\x\3\d\j\4\f\f\4\y\4\j\i\o\b\c\l\g\f\j\u\7\d\0\4\g\h\c\3\u\d\g\r\p\u\t\x\7\6\j\r\i\e\y\m\u\n\l\y\h\p\z\p\y\7\k\2\d\h\9\1\i\l\p\4 ]] 00:27:00.709 00:27:00.709 real 0m4.510s 00:27:00.709 user 0m3.631s 00:27:00.709 sys 0m0.562s 00:27:00.709 05:24:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:00.709 05:24:19 -- common/autotest_common.sh@10 -- # set +x 00:27:00.709 ************************************ 00:27:00.709 END TEST dd_flag_nofollow_forced_aio 00:27:00.709 ************************************ 00:27:00.709 05:24:19 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:27:00.709 05:24:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:00.709 05:24:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:00.709 05:24:19 -- common/autotest_common.sh@10 -- # set +x 00:27:00.709 ************************************ 00:27:00.709 START TEST dd_flag_noatime_forced_aio 00:27:00.709 ************************************ 00:27:00.709 05:24:19 -- common/autotest_common.sh@1104 -- # noatime 00:27:00.709 05:24:19 -- dd/posix.sh@53 -- # local atime_if 00:27:00.709 05:24:19 -- dd/posix.sh@54 -- # local atime_of 00:27:00.709 05:24:19 -- dd/posix.sh@58 -- # gen_bytes 512 00:27:00.709 05:24:19 -- dd/common.sh@98 -- # xtrace_disable 00:27:00.710 05:24:19 -- common/autotest_common.sh@10 -- # set +x 00:27:00.710 05:24:19 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:00.710 05:24:19 -- dd/posix.sh@60 -- # atime_if=1721971458 
00:27:00.710 05:24:19 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:00.710 05:24:19 -- dd/posix.sh@61 -- # atime_of=1721971459 00:27:00.710 05:24:19 -- dd/posix.sh@66 -- # sleep 1 00:27:02.087 05:24:20 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:02.087 [2024-07-26 05:24:20.856194] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:02.087 [2024-07-26 05:24:20.856329] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89272 ] 00:27:02.087 [2024-07-26 05:24:21.006399] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.087 [2024-07-26 05:24:21.155540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.281  Copying: 512/512 [B] (average 500 kBps) 00:27:03.281 00:27:03.281 05:24:22 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:03.281 05:24:22 -- dd/posix.sh@69 -- # (( atime_if == 1721971458 )) 00:27:03.281 05:24:22 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:03.281 05:24:22 -- dd/posix.sh@70 -- # (( atime_of == 1721971459 )) 00:27:03.281 05:24:22 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:03.281 [2024-07-26 05:24:22.346934] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:03.281 [2024-07-26 05:24:22.347152] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89290 ] 00:27:03.540 [2024-07-26 05:24:22.514920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.799 [2024-07-26 05:24:22.663800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.735  Copying: 512/512 [B] (average 500 kBps) 00:27:04.735 00:27:04.735 05:24:23 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:04.735 05:24:23 -- dd/posix.sh@73 -- # (( atime_if < 1721971462 )) 00:27:04.735 00:27:04.735 real 0m4.011s 00:27:04.735 user 0m2.423s 00:27:04.735 sys 0m0.360s 00:27:04.735 05:24:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:04.735 05:24:23 -- common/autotest_common.sh@10 -- # set +x 00:27:04.735 ************************************ 00:27:04.735 END TEST dd_flag_noatime_forced_aio 00:27:04.735 ************************************ 00:27:04.735 05:24:23 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:27:04.735 05:24:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:04.735 05:24:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:04.735 05:24:23 -- common/autotest_common.sh@10 -- # set +x 00:27:04.993 ************************************ 00:27:04.993 START TEST dd_flags_misc_forced_aio 00:27:04.993 ************************************ 00:27:04.993 05:24:23 -- common/autotest_common.sh@1104 -- # io 00:27:04.993 05:24:23 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:27:04.993 05:24:23 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:27:04.993 05:24:23 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:27:04.993 05:24:23 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:27:04.993 05:24:23 -- dd/posix.sh@86 -- # gen_bytes 512 00:27:04.993 05:24:23 -- dd/common.sh@98 -- # xtrace_disable 00:27:04.993 05:24:23 -- common/autotest_common.sh@10 -- # set +x 00:27:04.993 05:24:23 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:04.993 05:24:23 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:27:04.993 [2024-07-26 05:24:23.917120] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:04.993 [2024-07-26 05:24:23.917274] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89329 ] 00:27:04.993 [2024-07-26 05:24:24.084939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.251 [2024-07-26 05:24:24.237837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.444  Copying: 512/512 [B] (average 500 kBps) 00:27:06.444 00:27:06.444 05:24:25 -- dd/posix.sh@93 -- # [[ 1j8mjmv6xh94em47wlsbh0f3qxofrv4vv56kg4odut2kdu17njof84typytk3ow82bmejtvc9hbw1fwkmq58kd33feykb889pf6dydf30w7r7ajygnd9dddnp00sx2qlas1ypcylzlgn6q6yj3foumcangauqpuxky0kxobqs2i6zthwrc01d93hsan96yd9kt04yttxjvi8omzigtzlpwvn3f4npjge4qprgmz4te0njn2397iug2p6iq9jph8bgxudutroz07b1w1wujtiese25h6ggy6p6n6c8hda58wx8n965n6pmu6aa38iuds6qgqrtr30dw50gd56g2v9pc9bogwk3wlgv9jcdjr9dwo7485astj0sjdy3v940b0ev22xgrykwvm1mycdnt61usxnd5mtm61wdaiyrulkpaiqw3y1b3u10919npza1aabzanh5m0z98gytqfdosy5k6qvz9mu8e2dlu621s0uzait9hsp8sm1meuue9u5lwyw == \1\j\8\m\j\m\v\6\x\h\9\4\e\m\4\7\w\l\s\b\h\0\f\3\q\x\o\f\r\v\4\v\v\5\6\k\g\4\o\d\u\t\2\k\d\u\1\7\n\j\o\f\8\4\t\y\p\y\t\k\3\o\w\8\2\b\m\e\j\t\v\c\9\h\b\w\1\f\w\k\m\q\5\8\k\d\3\3\f\e\y\k\b\8\8\9\p\f\6\d\y\d\f\3\0\w\7\r\7\a\j\y\g\n\d\9\d\d\d\n\p\0\0\s\x\2\q\l\a\s\1\y\p\c\y\l\z\l\g\n\6\q\6\y\j\3\f\o\u\m\c\a\n\g\a\u\q\p\u\x\k\y\0\k\x\o\b\q\s\2\i\6\z\t\h\w\r\c\0\1\d\9\3\h\s\a\n\9\6\y\d\9\k\t\0\4\y\t\t\x\j\v\i\8\o\m\z\i\g\t\z\l\p\w\v\n\3\f\4\n\p\j\g\e\4\q\p\r\g\m\z\4\t\e\0\n\j\n\2\3\9\7\i\u\g\2\p\6\i\q\9\j\p\h\8\b\g\x\u\d\u\t\r\o\z\0\7\b\1\w\1\w\u\j\t\i\e\s\e\2\5\h\6\g\g\y\6\p\6\n\6\c\8\h\d\a\5\8\w\x\8\n\9\6\5\n\6\p\m\u\6\a\a\3\8\i\u\d\s\6\q\g\q\r\t\r\3\0\d\w\5\0\g\d\5\6\g\2\v\9\p\c\9\b\o\g\w\k\3\w\l\g\v\9\j\c\d\j\r\9\d\w\o\7\4\8\5\a\s\t\j\0\s\j\d\y\3\v\9\4\0\b\0\e\v\2\2\x\g\r\y\k\w\v\m\1\m\y\c\d\n\t\6\1\u\s\x\n\d\5\m\t\m\6\1\w\d\a\i\y\r\u\l\k\p\a\i\q\w\3\y\1\b\3\u\1\0\9\1\9\n\p\z\a\1\a\a\b\z\a\n\h\5\m\0\z\9\8\g\y\t\q\f\d\o\s\y\5\k\6\q\v\z\9\m\u\8\e\2\d\l\u\6\2\1\s\0\u\z\a\i\t\9\h\s\p\8\s\m\1\m\e\u\u\e\9\u\5\l\w\y\w ]] 00:27:06.444 05:24:25 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:06.444 05:24:25 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:27:06.444 [2024-07-26 05:24:25.424327] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:06.444 [2024-07-26 05:24:25.424487] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89343 ] 00:27:06.701 [2024-07-26 05:24:25.593517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.701 [2024-07-26 05:24:25.743513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.905  Copying: 512/512 [B] (average 500 kBps) 00:27:07.905 00:27:07.905 05:24:26 -- dd/posix.sh@93 -- # [[ 1j8mjmv6xh94em47wlsbh0f3qxofrv4vv56kg4odut2kdu17njof84typytk3ow82bmejtvc9hbw1fwkmq58kd33feykb889pf6dydf30w7r7ajygnd9dddnp00sx2qlas1ypcylzlgn6q6yj3foumcangauqpuxky0kxobqs2i6zthwrc01d93hsan96yd9kt04yttxjvi8omzigtzlpwvn3f4npjge4qprgmz4te0njn2397iug2p6iq9jph8bgxudutroz07b1w1wujtiese25h6ggy6p6n6c8hda58wx8n965n6pmu6aa38iuds6qgqrtr30dw50gd56g2v9pc9bogwk3wlgv9jcdjr9dwo7485astj0sjdy3v940b0ev22xgrykwvm1mycdnt61usxnd5mtm61wdaiyrulkpaiqw3y1b3u10919npza1aabzanh5m0z98gytqfdosy5k6qvz9mu8e2dlu621s0uzait9hsp8sm1meuue9u5lwyw == \1\j\8\m\j\m\v\6\x\h\9\4\e\m\4\7\w\l\s\b\h\0\f\3\q\x\o\f\r\v\4\v\v\5\6\k\g\4\o\d\u\t\2\k\d\u\1\7\n\j\o\f\8\4\t\y\p\y\t\k\3\o\w\8\2\b\m\e\j\t\v\c\9\h\b\w\1\f\w\k\m\q\5\8\k\d\3\3\f\e\y\k\b\8\8\9\p\f\6\d\y\d\f\3\0\w\7\r\7\a\j\y\g\n\d\9\d\d\d\n\p\0\0\s\x\2\q\l\a\s\1\y\p\c\y\l\z\l\g\n\6\q\6\y\j\3\f\o\u\m\c\a\n\g\a\u\q\p\u\x\k\y\0\k\x\o\b\q\s\2\i\6\z\t\h\w\r\c\0\1\d\9\3\h\s\a\n\9\6\y\d\9\k\t\0\4\y\t\t\x\j\v\i\8\o\m\z\i\g\t\z\l\p\w\v\n\3\f\4\n\p\j\g\e\4\q\p\r\g\m\z\4\t\e\0\n\j\n\2\3\9\7\i\u\g\2\p\6\i\q\9\j\p\h\8\b\g\x\u\d\u\t\r\o\z\0\7\b\1\w\1\w\u\j\t\i\e\s\e\2\5\h\6\g\g\y\6\p\6\n\6\c\8\h\d\a\5\8\w\x\8\n\9\6\5\n\6\p\m\u\6\a\a\3\8\i\u\d\s\6\q\g\q\r\t\r\3\0\d\w\5\0\g\d\5\6\g\2\v\9\p\c\9\b\o\g\w\k\3\w\l\g\v\9\j\c\d\j\r\9\d\w\o\7\4\8\5\a\s\t\j\0\s\j\d\y\3\v\9\4\0\b\0\e\v\2\2\x\g\r\y\k\w\v\m\1\m\y\c\d\n\t\6\1\u\s\x\n\d\5\m\t\m\6\1\w\d\a\i\y\r\u\l\k\p\a\i\q\w\3\y\1\b\3\u\1\0\9\1\9\n\p\z\a\1\a\a\b\z\a\n\h\5\m\0\z\9\8\g\y\t\q\f\d\o\s\y\5\k\6\q\v\z\9\m\u\8\e\2\d\l\u\6\2\1\s\0\u\z\a\i\t\9\h\s\p\8\s\m\1\m\e\u\u\e\9\u\5\l\w\y\w ]] 00:27:07.905 05:24:26 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:07.905 05:24:26 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:27:07.905 [2024-07-26 05:24:26.934706] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:07.905 [2024-07-26 05:24:26.934864] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89361 ] 00:27:08.191 [2024-07-26 05:24:27.104404] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.191 [2024-07-26 05:24:27.255862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.404  Copying: 512/512 [B] (average 100 kBps) 00:27:09.404 00:27:09.404 05:24:28 -- dd/posix.sh@93 -- # [[ 1j8mjmv6xh94em47wlsbh0f3qxofrv4vv56kg4odut2kdu17njof84typytk3ow82bmejtvc9hbw1fwkmq58kd33feykb889pf6dydf30w7r7ajygnd9dddnp00sx2qlas1ypcylzlgn6q6yj3foumcangauqpuxky0kxobqs2i6zthwrc01d93hsan96yd9kt04yttxjvi8omzigtzlpwvn3f4npjge4qprgmz4te0njn2397iug2p6iq9jph8bgxudutroz07b1w1wujtiese25h6ggy6p6n6c8hda58wx8n965n6pmu6aa38iuds6qgqrtr30dw50gd56g2v9pc9bogwk3wlgv9jcdjr9dwo7485astj0sjdy3v940b0ev22xgrykwvm1mycdnt61usxnd5mtm61wdaiyrulkpaiqw3y1b3u10919npza1aabzanh5m0z98gytqfdosy5k6qvz9mu8e2dlu621s0uzait9hsp8sm1meuue9u5lwyw == \1\j\8\m\j\m\v\6\x\h\9\4\e\m\4\7\w\l\s\b\h\0\f\3\q\x\o\f\r\v\4\v\v\5\6\k\g\4\o\d\u\t\2\k\d\u\1\7\n\j\o\f\8\4\t\y\p\y\t\k\3\o\w\8\2\b\m\e\j\t\v\c\9\h\b\w\1\f\w\k\m\q\5\8\k\d\3\3\f\e\y\k\b\8\8\9\p\f\6\d\y\d\f\3\0\w\7\r\7\a\j\y\g\n\d\9\d\d\d\n\p\0\0\s\x\2\q\l\a\s\1\y\p\c\y\l\z\l\g\n\6\q\6\y\j\3\f\o\u\m\c\a\n\g\a\u\q\p\u\x\k\y\0\k\x\o\b\q\s\2\i\6\z\t\h\w\r\c\0\1\d\9\3\h\s\a\n\9\6\y\d\9\k\t\0\4\y\t\t\x\j\v\i\8\o\m\z\i\g\t\z\l\p\w\v\n\3\f\4\n\p\j\g\e\4\q\p\r\g\m\z\4\t\e\0\n\j\n\2\3\9\7\i\u\g\2\p\6\i\q\9\j\p\h\8\b\g\x\u\d\u\t\r\o\z\0\7\b\1\w\1\w\u\j\t\i\e\s\e\2\5\h\6\g\g\y\6\p\6\n\6\c\8\h\d\a\5\8\w\x\8\n\9\6\5\n\6\p\m\u\6\a\a\3\8\i\u\d\s\6\q\g\q\r\t\r\3\0\d\w\5\0\g\d\5\6\g\2\v\9\p\c\9\b\o\g\w\k\3\w\l\g\v\9\j\c\d\j\r\9\d\w\o\7\4\8\5\a\s\t\j\0\s\j\d\y\3\v\9\4\0\b\0\e\v\2\2\x\g\r\y\k\w\v\m\1\m\y\c\d\n\t\6\1\u\s\x\n\d\5\m\t\m\6\1\w\d\a\i\y\r\u\l\k\p\a\i\q\w\3\y\1\b\3\u\1\0\9\1\9\n\p\z\a\1\a\a\b\z\a\n\h\5\m\0\z\9\8\g\y\t\q\f\d\o\s\y\5\k\6\q\v\z\9\m\u\8\e\2\d\l\u\6\2\1\s\0\u\z\a\i\t\9\h\s\p\8\s\m\1\m\e\u\u\e\9\u\5\l\w\y\w ]] 00:27:09.404 05:24:28 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:09.404 05:24:28 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:27:09.404 [2024-07-26 05:24:28.451609] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:09.404 [2024-07-26 05:24:28.451773] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89382 ] 00:27:09.663 [2024-07-26 05:24:28.620211] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.663 [2024-07-26 05:24:28.768645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.857  Copying: 512/512 [B] (average 166 kBps) 00:27:10.857 00:27:10.857 05:24:29 -- dd/posix.sh@93 -- # [[ 1j8mjmv6xh94em47wlsbh0f3qxofrv4vv56kg4odut2kdu17njof84typytk3ow82bmejtvc9hbw1fwkmq58kd33feykb889pf6dydf30w7r7ajygnd9dddnp00sx2qlas1ypcylzlgn6q6yj3foumcangauqpuxky0kxobqs2i6zthwrc01d93hsan96yd9kt04yttxjvi8omzigtzlpwvn3f4npjge4qprgmz4te0njn2397iug2p6iq9jph8bgxudutroz07b1w1wujtiese25h6ggy6p6n6c8hda58wx8n965n6pmu6aa38iuds6qgqrtr30dw50gd56g2v9pc9bogwk3wlgv9jcdjr9dwo7485astj0sjdy3v940b0ev22xgrykwvm1mycdnt61usxnd5mtm61wdaiyrulkpaiqw3y1b3u10919npza1aabzanh5m0z98gytqfdosy5k6qvz9mu8e2dlu621s0uzait9hsp8sm1meuue9u5lwyw == \1\j\8\m\j\m\v\6\x\h\9\4\e\m\4\7\w\l\s\b\h\0\f\3\q\x\o\f\r\v\4\v\v\5\6\k\g\4\o\d\u\t\2\k\d\u\1\7\n\j\o\f\8\4\t\y\p\y\t\k\3\o\w\8\2\b\m\e\j\t\v\c\9\h\b\w\1\f\w\k\m\q\5\8\k\d\3\3\f\e\y\k\b\8\8\9\p\f\6\d\y\d\f\3\0\w\7\r\7\a\j\y\g\n\d\9\d\d\d\n\p\0\0\s\x\2\q\l\a\s\1\y\p\c\y\l\z\l\g\n\6\q\6\y\j\3\f\o\u\m\c\a\n\g\a\u\q\p\u\x\k\y\0\k\x\o\b\q\s\2\i\6\z\t\h\w\r\c\0\1\d\9\3\h\s\a\n\9\6\y\d\9\k\t\0\4\y\t\t\x\j\v\i\8\o\m\z\i\g\t\z\l\p\w\v\n\3\f\4\n\p\j\g\e\4\q\p\r\g\m\z\4\t\e\0\n\j\n\2\3\9\7\i\u\g\2\p\6\i\q\9\j\p\h\8\b\g\x\u\d\u\t\r\o\z\0\7\b\1\w\1\w\u\j\t\i\e\s\e\2\5\h\6\g\g\y\6\p\6\n\6\c\8\h\d\a\5\8\w\x\8\n\9\6\5\n\6\p\m\u\6\a\a\3\8\i\u\d\s\6\q\g\q\r\t\r\3\0\d\w\5\0\g\d\5\6\g\2\v\9\p\c\9\b\o\g\w\k\3\w\l\g\v\9\j\c\d\j\r\9\d\w\o\7\4\8\5\a\s\t\j\0\s\j\d\y\3\v\9\4\0\b\0\e\v\2\2\x\g\r\y\k\w\v\m\1\m\y\c\d\n\t\6\1\u\s\x\n\d\5\m\t\m\6\1\w\d\a\i\y\r\u\l\k\p\a\i\q\w\3\y\1\b\3\u\1\0\9\1\9\n\p\z\a\1\a\a\b\z\a\n\h\5\m\0\z\9\8\g\y\t\q\f\d\o\s\y\5\k\6\q\v\z\9\m\u\8\e\2\d\l\u\6\2\1\s\0\u\z\a\i\t\9\h\s\p\8\s\m\1\m\e\u\u\e\9\u\5\l\w\y\w ]] 00:27:10.857 05:24:29 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:27:10.857 05:24:29 -- dd/posix.sh@86 -- # gen_bytes 512 00:27:10.857 05:24:29 -- dd/common.sh@98 -- # xtrace_disable 00:27:10.857 05:24:29 -- common/autotest_common.sh@10 -- # set +x 00:27:10.857 05:24:29 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:10.857 05:24:29 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:27:10.857 [2024-07-26 05:24:29.963453] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:10.857 [2024-07-26 05:24:29.963605] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89396 ] 00:27:11.115 [2024-07-26 05:24:30.133946] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.373 [2024-07-26 05:24:30.282670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.568  Copying: 512/512 [B] (average 500 kBps) 00:27:12.568 00:27:12.568 05:24:31 -- dd/posix.sh@93 -- # [[ px4vat5vmeqfaqneqmwielkp15klds62o7pwm7bs8h329x5ckhvagbzsej7ss7kflnw7bv0v45dpjjzpkosx0um2wdb39v5cehyhvd5qadgik5hhdx3s4w7rpz3eypfqbepeh1vj1r1xluluya7ogozn7899d7iekf9l5ia8k96ieg0sfadxtoedipdtnx9gkxo1d0yzrx8qce2xsqt40vsnjy65vmxuolnnsqs1ia1v9g4wm1qgwgth9w9suo0cnvrvhd1o8tb6b8uvwnesh0x8si1pgrgdlle610r8m2hbdt2yko4x8qtho66iokrkpq49qojtw9wpmnwpycnzhopi57d88ctc99xq1fml3zzvranx19n4c69t4cmwxf1elcodltxs5i6munsew6l2m09fi05kvfg21daosquxtm4u3h3ilocv94xt8ecx7g230h0qorn89v01p4gnla6p1eqx9vsx4ijb8w32nq6uo72m5dc4atc5yf9fe0j76ntd == \p\x\4\v\a\t\5\v\m\e\q\f\a\q\n\e\q\m\w\i\e\l\k\p\1\5\k\l\d\s\6\2\o\7\p\w\m\7\b\s\8\h\3\2\9\x\5\c\k\h\v\a\g\b\z\s\e\j\7\s\s\7\k\f\l\n\w\7\b\v\0\v\4\5\d\p\j\j\z\p\k\o\s\x\0\u\m\2\w\d\b\3\9\v\5\c\e\h\y\h\v\d\5\q\a\d\g\i\k\5\h\h\d\x\3\s\4\w\7\r\p\z\3\e\y\p\f\q\b\e\p\e\h\1\v\j\1\r\1\x\l\u\l\u\y\a\7\o\g\o\z\n\7\8\9\9\d\7\i\e\k\f\9\l\5\i\a\8\k\9\6\i\e\g\0\s\f\a\d\x\t\o\e\d\i\p\d\t\n\x\9\g\k\x\o\1\d\0\y\z\r\x\8\q\c\e\2\x\s\q\t\4\0\v\s\n\j\y\6\5\v\m\x\u\o\l\n\n\s\q\s\1\i\a\1\v\9\g\4\w\m\1\q\g\w\g\t\h\9\w\9\s\u\o\0\c\n\v\r\v\h\d\1\o\8\t\b\6\b\8\u\v\w\n\e\s\h\0\x\8\s\i\1\p\g\r\g\d\l\l\e\6\1\0\r\8\m\2\h\b\d\t\2\y\k\o\4\x\8\q\t\h\o\6\6\i\o\k\r\k\p\q\4\9\q\o\j\t\w\9\w\p\m\n\w\p\y\c\n\z\h\o\p\i\5\7\d\8\8\c\t\c\9\9\x\q\1\f\m\l\3\z\z\v\r\a\n\x\1\9\n\4\c\6\9\t\4\c\m\w\x\f\1\e\l\c\o\d\l\t\x\s\5\i\6\m\u\n\s\e\w\6\l\2\m\0\9\f\i\0\5\k\v\f\g\2\1\d\a\o\s\q\u\x\t\m\4\u\3\h\3\i\l\o\c\v\9\4\x\t\8\e\c\x\7\g\2\3\0\h\0\q\o\r\n\8\9\v\0\1\p\4\g\n\l\a\6\p\1\e\q\x\9\v\s\x\4\i\j\b\8\w\3\2\n\q\6\u\o\7\2\m\5\d\c\4\a\t\c\5\y\f\9\f\e\0\j\7\6\n\t\d ]] 00:27:12.568 05:24:31 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:12.568 05:24:31 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:27:12.568 [2024-07-26 05:24:31.475783] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:12.568 [2024-07-26 05:24:31.475939] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89416 ] 00:27:12.568 [2024-07-26 05:24:31.650972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.827 [2024-07-26 05:24:31.814322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.021  Copying: 512/512 [B] (average 500 kBps) 00:27:14.021 00:27:14.021 05:24:32 -- dd/posix.sh@93 -- # [[ px4vat5vmeqfaqneqmwielkp15klds62o7pwm7bs8h329x5ckhvagbzsej7ss7kflnw7bv0v45dpjjzpkosx0um2wdb39v5cehyhvd5qadgik5hhdx3s4w7rpz3eypfqbepeh1vj1r1xluluya7ogozn7899d7iekf9l5ia8k96ieg0sfadxtoedipdtnx9gkxo1d0yzrx8qce2xsqt40vsnjy65vmxuolnnsqs1ia1v9g4wm1qgwgth9w9suo0cnvrvhd1o8tb6b8uvwnesh0x8si1pgrgdlle610r8m2hbdt2yko4x8qtho66iokrkpq49qojtw9wpmnwpycnzhopi57d88ctc99xq1fml3zzvranx19n4c69t4cmwxf1elcodltxs5i6munsew6l2m09fi05kvfg21daosquxtm4u3h3ilocv94xt8ecx7g230h0qorn89v01p4gnla6p1eqx9vsx4ijb8w32nq6uo72m5dc4atc5yf9fe0j76ntd == \p\x\4\v\a\t\5\v\m\e\q\f\a\q\n\e\q\m\w\i\e\l\k\p\1\5\k\l\d\s\6\2\o\7\p\w\m\7\b\s\8\h\3\2\9\x\5\c\k\h\v\a\g\b\z\s\e\j\7\s\s\7\k\f\l\n\w\7\b\v\0\v\4\5\d\p\j\j\z\p\k\o\s\x\0\u\m\2\w\d\b\3\9\v\5\c\e\h\y\h\v\d\5\q\a\d\g\i\k\5\h\h\d\x\3\s\4\w\7\r\p\z\3\e\y\p\f\q\b\e\p\e\h\1\v\j\1\r\1\x\l\u\l\u\y\a\7\o\g\o\z\n\7\8\9\9\d\7\i\e\k\f\9\l\5\i\a\8\k\9\6\i\e\g\0\s\f\a\d\x\t\o\e\d\i\p\d\t\n\x\9\g\k\x\o\1\d\0\y\z\r\x\8\q\c\e\2\x\s\q\t\4\0\v\s\n\j\y\6\5\v\m\x\u\o\l\n\n\s\q\s\1\i\a\1\v\9\g\4\w\m\1\q\g\w\g\t\h\9\w\9\s\u\o\0\c\n\v\r\v\h\d\1\o\8\t\b\6\b\8\u\v\w\n\e\s\h\0\x\8\s\i\1\p\g\r\g\d\l\l\e\6\1\0\r\8\m\2\h\b\d\t\2\y\k\o\4\x\8\q\t\h\o\6\6\i\o\k\r\k\p\q\4\9\q\o\j\t\w\9\w\p\m\n\w\p\y\c\n\z\h\o\p\i\5\7\d\8\8\c\t\c\9\9\x\q\1\f\m\l\3\z\z\v\r\a\n\x\1\9\n\4\c\6\9\t\4\c\m\w\x\f\1\e\l\c\o\d\l\t\x\s\5\i\6\m\u\n\s\e\w\6\l\2\m\0\9\f\i\0\5\k\v\f\g\2\1\d\a\o\s\q\u\x\t\m\4\u\3\h\3\i\l\o\c\v\9\4\x\t\8\e\c\x\7\g\2\3\0\h\0\q\o\r\n\8\9\v\0\1\p\4\g\n\l\a\6\p\1\e\q\x\9\v\s\x\4\i\j\b\8\w\3\2\n\q\6\u\o\7\2\m\5\d\c\4\a\t\c\5\y\f\9\f\e\0\j\7\6\n\t\d ]] 00:27:14.021 05:24:32 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:14.021 05:24:32 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:27:14.021 [2024-07-26 05:24:32.986021] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:14.021 [2024-07-26 05:24:32.986146] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89434 ] 00:27:14.280 [2024-07-26 05:24:33.138090] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.280 [2024-07-26 05:24:33.284107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.471  Copying: 512/512 [B] (average 100 kBps) 00:27:15.471 00:27:15.471 05:24:34 -- dd/posix.sh@93 -- # [[ px4vat5vmeqfaqneqmwielkp15klds62o7pwm7bs8h329x5ckhvagbzsej7ss7kflnw7bv0v45dpjjzpkosx0um2wdb39v5cehyhvd5qadgik5hhdx3s4w7rpz3eypfqbepeh1vj1r1xluluya7ogozn7899d7iekf9l5ia8k96ieg0sfadxtoedipdtnx9gkxo1d0yzrx8qce2xsqt40vsnjy65vmxuolnnsqs1ia1v9g4wm1qgwgth9w9suo0cnvrvhd1o8tb6b8uvwnesh0x8si1pgrgdlle610r8m2hbdt2yko4x8qtho66iokrkpq49qojtw9wpmnwpycnzhopi57d88ctc99xq1fml3zzvranx19n4c69t4cmwxf1elcodltxs5i6munsew6l2m09fi05kvfg21daosquxtm4u3h3ilocv94xt8ecx7g230h0qorn89v01p4gnla6p1eqx9vsx4ijb8w32nq6uo72m5dc4atc5yf9fe0j76ntd == \p\x\4\v\a\t\5\v\m\e\q\f\a\q\n\e\q\m\w\i\e\l\k\p\1\5\k\l\d\s\6\2\o\7\p\w\m\7\b\s\8\h\3\2\9\x\5\c\k\h\v\a\g\b\z\s\e\j\7\s\s\7\k\f\l\n\w\7\b\v\0\v\4\5\d\p\j\j\z\p\k\o\s\x\0\u\m\2\w\d\b\3\9\v\5\c\e\h\y\h\v\d\5\q\a\d\g\i\k\5\h\h\d\x\3\s\4\w\7\r\p\z\3\e\y\p\f\q\b\e\p\e\h\1\v\j\1\r\1\x\l\u\l\u\y\a\7\o\g\o\z\n\7\8\9\9\d\7\i\e\k\f\9\l\5\i\a\8\k\9\6\i\e\g\0\s\f\a\d\x\t\o\e\d\i\p\d\t\n\x\9\g\k\x\o\1\d\0\y\z\r\x\8\q\c\e\2\x\s\q\t\4\0\v\s\n\j\y\6\5\v\m\x\u\o\l\n\n\s\q\s\1\i\a\1\v\9\g\4\w\m\1\q\g\w\g\t\h\9\w\9\s\u\o\0\c\n\v\r\v\h\d\1\o\8\t\b\6\b\8\u\v\w\n\e\s\h\0\x\8\s\i\1\p\g\r\g\d\l\l\e\6\1\0\r\8\m\2\h\b\d\t\2\y\k\o\4\x\8\q\t\h\o\6\6\i\o\k\r\k\p\q\4\9\q\o\j\t\w\9\w\p\m\n\w\p\y\c\n\z\h\o\p\i\5\7\d\8\8\c\t\c\9\9\x\q\1\f\m\l\3\z\z\v\r\a\n\x\1\9\n\4\c\6\9\t\4\c\m\w\x\f\1\e\l\c\o\d\l\t\x\s\5\i\6\m\u\n\s\e\w\6\l\2\m\0\9\f\i\0\5\k\v\f\g\2\1\d\a\o\s\q\u\x\t\m\4\u\3\h\3\i\l\o\c\v\9\4\x\t\8\e\c\x\7\g\2\3\0\h\0\q\o\r\n\8\9\v\0\1\p\4\g\n\l\a\6\p\1\e\q\x\9\v\s\x\4\i\j\b\8\w\3\2\n\q\6\u\o\7\2\m\5\d\c\4\a\t\c\5\y\f\9\f\e\0\j\7\6\n\t\d ]] 00:27:15.471 05:24:34 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:15.472 05:24:34 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:27:15.472 [2024-07-26 05:24:34.483989] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:15.472 [2024-07-26 05:24:34.484172] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89455 ] 00:27:15.730 [2024-07-26 05:24:34.652845] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.730 [2024-07-26 05:24:34.802888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.927  Copying: 512/512 [B] (average 166 kBps) 00:27:16.927 00:27:16.927 05:24:35 -- dd/posix.sh@93 -- # [[ px4vat5vmeqfaqneqmwielkp15klds62o7pwm7bs8h329x5ckhvagbzsej7ss7kflnw7bv0v45dpjjzpkosx0um2wdb39v5cehyhvd5qadgik5hhdx3s4w7rpz3eypfqbepeh1vj1r1xluluya7ogozn7899d7iekf9l5ia8k96ieg0sfadxtoedipdtnx9gkxo1d0yzrx8qce2xsqt40vsnjy65vmxuolnnsqs1ia1v9g4wm1qgwgth9w9suo0cnvrvhd1o8tb6b8uvwnesh0x8si1pgrgdlle610r8m2hbdt2yko4x8qtho66iokrkpq49qojtw9wpmnwpycnzhopi57d88ctc99xq1fml3zzvranx19n4c69t4cmwxf1elcodltxs5i6munsew6l2m09fi05kvfg21daosquxtm4u3h3ilocv94xt8ecx7g230h0qorn89v01p4gnla6p1eqx9vsx4ijb8w32nq6uo72m5dc4atc5yf9fe0j76ntd == \p\x\4\v\a\t\5\v\m\e\q\f\a\q\n\e\q\m\w\i\e\l\k\p\1\5\k\l\d\s\6\2\o\7\p\w\m\7\b\s\8\h\3\2\9\x\5\c\k\h\v\a\g\b\z\s\e\j\7\s\s\7\k\f\l\n\w\7\b\v\0\v\4\5\d\p\j\j\z\p\k\o\s\x\0\u\m\2\w\d\b\3\9\v\5\c\e\h\y\h\v\d\5\q\a\d\g\i\k\5\h\h\d\x\3\s\4\w\7\r\p\z\3\e\y\p\f\q\b\e\p\e\h\1\v\j\1\r\1\x\l\u\l\u\y\a\7\o\g\o\z\n\7\8\9\9\d\7\i\e\k\f\9\l\5\i\a\8\k\9\6\i\e\g\0\s\f\a\d\x\t\o\e\d\i\p\d\t\n\x\9\g\k\x\o\1\d\0\y\z\r\x\8\q\c\e\2\x\s\q\t\4\0\v\s\n\j\y\6\5\v\m\x\u\o\l\n\n\s\q\s\1\i\a\1\v\9\g\4\w\m\1\q\g\w\g\t\h\9\w\9\s\u\o\0\c\n\v\r\v\h\d\1\o\8\t\b\6\b\8\u\v\w\n\e\s\h\0\x\8\s\i\1\p\g\r\g\d\l\l\e\6\1\0\r\8\m\2\h\b\d\t\2\y\k\o\4\x\8\q\t\h\o\6\6\i\o\k\r\k\p\q\4\9\q\o\j\t\w\9\w\p\m\n\w\p\y\c\n\z\h\o\p\i\5\7\d\8\8\c\t\c\9\9\x\q\1\f\m\l\3\z\z\v\r\a\n\x\1\9\n\4\c\6\9\t\4\c\m\w\x\f\1\e\l\c\o\d\l\t\x\s\5\i\6\m\u\n\s\e\w\6\l\2\m\0\9\f\i\0\5\k\v\f\g\2\1\d\a\o\s\q\u\x\t\m\4\u\3\h\3\i\l\o\c\v\9\4\x\t\8\e\c\x\7\g\2\3\0\h\0\q\o\r\n\8\9\v\0\1\p\4\g\n\l\a\6\p\1\e\q\x\9\v\s\x\4\i\j\b\8\w\3\2\n\q\6\u\o\7\2\m\5\d\c\4\a\t\c\5\y\f\9\f\e\0\j\7\6\n\t\d ]] 00:27:16.927 00:27:16.927 real 0m12.089s 00:27:16.927 user 0m9.675s 00:27:16.927 sys 0m1.470s 00:27:16.927 05:24:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:16.927 ************************************ 00:27:16.927 END TEST dd_flags_misc_forced_aio 00:27:16.927 ************************************ 00:27:16.927 05:24:35 -- common/autotest_common.sh@10 -- # set +x 00:27:16.927 05:24:35 -- dd/posix.sh@1 -- # cleanup 00:27:16.927 05:24:35 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:27:16.927 05:24:35 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:27:16.927 00:27:16.927 real 0m51.037s 00:27:16.927 user 0m38.977s 00:27:16.927 sys 0m6.424s 00:27:16.927 05:24:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:16.927 05:24:35 -- common/autotest_common.sh@10 -- # set +x 00:27:16.927 ************************************ 00:27:16.927 END TEST spdk_dd_posix 00:27:16.927 ************************************ 00:27:16.927 05:24:36 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:27:17.187 05:24:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:17.187 05:24:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:17.187 05:24:36 -- 
common/autotest_common.sh@10 -- # set +x 00:27:17.187 ************************************ 00:27:17.187 START TEST spdk_dd_malloc 00:27:17.187 ************************************ 00:27:17.187 05:24:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:27:17.187 * Looking for test storage... 00:27:17.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:17.187 05:24:36 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:17.187 05:24:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:17.187 05:24:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:17.187 05:24:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:17.187 05:24:36 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:17.187 05:24:36 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:17.187 05:24:36 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:17.187 05:24:36 -- paths/export.sh@5 -- # 
PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:17.187 05:24:36 -- paths/export.sh@6 -- # export PATH 00:27:17.187 05:24:36 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:17.187 05:24:36 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:27:17.187 05:24:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:17.187 05:24:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:17.187 05:24:36 -- common/autotest_common.sh@10 -- # set +x 00:27:17.187 ************************************ 00:27:17.187 START TEST dd_malloc_copy 00:27:17.187 ************************************ 00:27:17.187 05:24:36 -- common/autotest_common.sh@1104 -- # malloc_copy 00:27:17.187 05:24:36 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:27:17.187 05:24:36 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:27:17.187 05:24:36 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:27:17.187 05:24:36 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:27:17.187 05:24:36 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:27:17.187 05:24:36 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:27:17.187 05:24:36 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:27:17.187 05:24:36 -- dd/malloc.sh@28 -- # gen_conf 00:27:17.187 05:24:36 -- dd/common.sh@31 -- # xtrace_disable 00:27:17.187 05:24:36 -- common/autotest_common.sh@10 -- # set +x 00:27:17.187 { 00:27:17.187 "subsystems": [ 00:27:17.187 { 00:27:17.187 "subsystem": "bdev", 00:27:17.187 "config": [ 00:27:17.187 { 00:27:17.187 "params": { 00:27:17.187 "block_size": 512, 00:27:17.187 "num_blocks": 1048576, 00:27:17.187 "name": "malloc0" 00:27:17.187 }, 00:27:17.187 "method": "bdev_malloc_create" 00:27:17.187 }, 00:27:17.187 { 00:27:17.187 "params": { 00:27:17.187 "block_size": 512, 00:27:17.187 "num_blocks": 1048576, 00:27:17.187 "name": "malloc1" 00:27:17.187 }, 00:27:17.187 "method": "bdev_malloc_create" 
00:27:17.187 }, 00:27:17.187 { 00:27:17.187 "method": "bdev_wait_for_examine" 00:27:17.187 } 00:27:17.187 ] 00:27:17.187 } 00:27:17.187 ] 00:27:17.187 } 00:27:17.187 [2024-07-26 05:24:36.199982] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:17.187 [2024-07-26 05:24:36.200196] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89535 ] 00:27:17.446 [2024-07-26 05:24:36.368340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.446 [2024-07-26 05:24:36.517232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.092  Copying: 210/512 [MB] (210 MBps) Copying: 421/512 [MB] (210 MBps) Copying: 512/512 [MB] (average 211 MBps) 00:27:24.092 00:27:24.092 05:24:42 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:27:24.092 05:24:42 -- dd/malloc.sh@33 -- # gen_conf 00:27:24.092 05:24:42 -- dd/common.sh@31 -- # xtrace_disable 00:27:24.092 05:24:42 -- common/autotest_common.sh@10 -- # set +x 00:27:24.092 { 00:27:24.092 "subsystems": [ 00:27:24.092 { 00:27:24.092 "subsystem": "bdev", 00:27:24.092 "config": [ 00:27:24.092 { 00:27:24.092 "params": { 00:27:24.092 "block_size": 512, 00:27:24.092 "num_blocks": 1048576, 00:27:24.092 "name": "malloc0" 00:27:24.092 }, 00:27:24.092 "method": "bdev_malloc_create" 00:27:24.092 }, 00:27:24.092 { 00:27:24.092 "params": { 00:27:24.092 "block_size": 512, 00:27:24.092 "num_blocks": 1048576, 00:27:24.092 "name": "malloc1" 00:27:24.092 }, 00:27:24.092 "method": "bdev_malloc_create" 00:27:24.092 }, 00:27:24.092 { 00:27:24.092 "method": "bdev_wait_for_examine" 00:27:24.092 } 00:27:24.092 ] 00:27:24.092 } 00:27:24.092 ] 00:27:24.092 } 00:27:24.092 [2024-07-26 05:24:42.792625] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:24.092 [2024-07-26 05:24:42.792777] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89610 ] 00:27:24.092 [2024-07-26 05:24:42.961444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.092 [2024-07-26 05:24:43.109457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.849  Copying: 213/512 [MB] (213 MBps) Copying: 422/512 [MB] (209 MBps) Copying: 512/512 [MB] (average 211 MBps) 00:27:30.849 00:27:30.849 00:27:30.849 real 0m13.180s 00:27:30.849 user 0m12.105s 00:27:30.849 sys 0m0.873s 00:27:30.849 05:24:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:30.849 05:24:49 -- common/autotest_common.sh@10 -- # set +x 00:27:30.849 ************************************ 00:27:30.849 END TEST dd_malloc_copy 00:27:30.849 ************************************ 00:27:30.849 00:27:30.849 real 0m13.306s 00:27:30.849 user 0m12.155s 00:27:30.849 sys 0m0.956s 00:27:30.849 05:24:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:30.849 ************************************ 00:27:30.849 END TEST spdk_dd_malloc 00:27:30.849 ************************************ 00:27:30.849 05:24:49 -- common/autotest_common.sh@10 -- # set +x 00:27:30.849 05:24:49 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:27:30.849 05:24:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:30.849 05:24:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:30.849 05:24:49 -- common/autotest_common.sh@10 -- # set +x 00:27:30.849 ************************************ 00:27:30.849 START TEST spdk_dd_bdev_to_bdev 00:27:30.849 ************************************ 00:27:30.849 05:24:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:27:30.849 * Looking for test storage... 
00:27:30.849 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:30.849 05:24:49 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:30.849 05:24:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:30.849 05:24:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:30.849 05:24:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:30.849 05:24:49 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:30.849 05:24:49 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:30.849 05:24:49 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:30.849 05:24:49 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:30.849 05:24:49 -- paths/export.sh@6 -- # export PATH 00:27:30.849 05:24:49 -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:30.849 05:24:49 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:27:30.849 05:24:49 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:27:30.849 05:24:49 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:27:30.849 05:24:49 -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:27:30.849 05:24:49 -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:27:30.849 05:24:49 -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:27:30.849 05:24:49 -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:06.0 00:27:30.849 05:24:49 -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:27:30.849 05:24:49 -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:27:30.849 05:24:49 -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:27:30.849 05:24:49 -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:27:30.849 05:24:49 -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096') 00:27:30.849 05:24:49 -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:27:30.849 05:24:49 -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:27:30.849 [2024-07-26 05:24:49.526153] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:30.849 [2024-07-26 05:24:49.526305] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89742 ] 00:27:30.849 [2024-07-26 05:24:49.680415] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.849 [2024-07-26 05:24:49.829272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.045  Copying: 256/256 [MB] (average 1896 MBps) 00:27:32.045 00:27:32.045 05:24:51 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:32.045 05:24:51 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:32.045 05:24:51 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:27:32.045 05:24:51 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:27:32.045 05:24:51 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:27:32.045 05:24:51 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:27:32.045 05:24:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:32.045 05:24:51 -- common/autotest_common.sh@10 -- # set +x 00:27:32.045 ************************************ 00:27:32.045 START TEST dd_inflate_file 00:27:32.045 ************************************ 00:27:32.045 05:24:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:27:32.303 [2024-07-26 05:24:51.166394] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:32.303 [2024-07-26 05:24:51.166550] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89758 ] 00:27:32.303 [2024-07-26 05:24:51.334985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.562 [2024-07-26 05:24:51.491532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.758  Copying: 64/64 [MB] (average 1523 MBps) 00:27:33.758 00:27:33.758 00:27:33.758 real 0m1.562s 00:27:33.758 user 0m1.226s 00:27:33.758 sys 0m0.220s 00:27:33.758 05:24:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:33.758 05:24:52 -- common/autotest_common.sh@10 -- # set +x 00:27:33.758 ************************************ 00:27:33.758 END TEST dd_inflate_file 00:27:33.758 ************************************ 00:27:33.758 05:24:52 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:27:33.758 05:24:52 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:27:33.758 05:24:52 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:27:33.758 05:24:52 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:27:33.758 05:24:52 -- dd/common.sh@31 -- # xtrace_disable 00:27:33.758 05:24:52 -- common/autotest_common.sh@10 -- # set +x 00:27:33.758 05:24:52 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:27:33.758 05:24:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:33.758 05:24:52 -- common/autotest_common.sh@10 -- # set +x 00:27:33.758 ************************************ 00:27:33.758 START TEST dd_copy_to_out_bdev 00:27:33.758 ************************************ 00:27:33.758 05:24:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:27:33.758 { 00:27:33.758 "subsystems": [ 00:27:33.758 { 00:27:33.758 "subsystem": "bdev", 00:27:33.758 "config": [ 00:27:33.758 { 00:27:33.758 "params": { 00:27:33.758 "block_size": 4096, 00:27:33.758 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:27:33.758 "name": "aio1" 00:27:33.758 }, 00:27:33.758 "method": "bdev_aio_create" 00:27:33.758 }, 00:27:33.758 { 00:27:33.758 "params": { 00:27:33.758 "trtype": "pcie", 00:27:33.758 "traddr": "0000:00:06.0", 00:27:33.758 "name": "Nvme0" 00:27:33.758 }, 00:27:33.758 "method": "bdev_nvme_attach_controller" 00:27:33.758 }, 00:27:33.758 { 00:27:33.758 "method": "bdev_wait_for_examine" 00:27:33.758 } 00:27:33.758 ] 00:27:33.758 } 00:27:33.758 ] 00:27:33.758 } 00:27:33.758 [2024-07-26 05:24:52.789230] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:33.758 [2024-07-26 05:24:52.789394] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89806 ] 00:27:34.017 [2024-07-26 05:24:52.955455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.017 [2024-07-26 05:24:53.103797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.898  Copying: 40/64 [MB] (40 MBps) Copying: 64/64 [MB] (average 41 MBps) 00:27:36.898 00:27:36.898 00:27:36.898 real 0m3.122s 00:27:36.899 user 0m2.739s 00:27:36.899 sys 0m0.259s 00:27:36.899 05:24:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:36.899 ************************************ 00:27:36.899 END TEST dd_copy_to_out_bdev 00:27:36.899 ************************************ 00:27:36.899 05:24:55 -- common/autotest_common.sh@10 -- # set +x 00:27:36.899 05:24:55 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:27:36.899 05:24:55 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:27:36.899 05:24:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:36.899 05:24:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:36.899 05:24:55 -- common/autotest_common.sh@10 -- # set +x 00:27:36.899 ************************************ 00:27:36.899 START TEST dd_offset_magic 00:27:36.899 ************************************ 00:27:36.899 05:24:55 -- common/autotest_common.sh@1104 -- # offset_magic 00:27:36.899 05:24:55 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:27:36.899 05:24:55 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:27:36.899 05:24:55 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:27:36.899 05:24:55 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:27:36.899 05:24:55 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:27:36.899 05:24:55 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:27:36.899 05:24:55 -- dd/common.sh@31 -- # xtrace_disable 00:27:36.899 05:24:55 -- common/autotest_common.sh@10 -- # set +x 00:27:36.899 { 00:27:36.899 "subsystems": [ 00:27:36.899 { 00:27:36.899 "subsystem": "bdev", 00:27:36.899 "config": [ 00:27:36.899 { 00:27:36.899 "params": { 00:27:36.899 "block_size": 4096, 00:27:36.899 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:27:36.899 "name": "aio1" 00:27:36.899 }, 00:27:36.899 "method": "bdev_aio_create" 00:27:36.899 }, 00:27:36.899 { 00:27:36.899 "params": { 00:27:36.899 "trtype": "pcie", 00:27:36.899 "traddr": "0000:00:06.0", 00:27:36.899 "name": "Nvme0" 00:27:36.899 }, 00:27:36.899 "method": "bdev_nvme_attach_controller" 00:27:36.899 }, 00:27:36.899 { 00:27:36.899 "method": "bdev_wait_for_examine" 00:27:36.899 } 00:27:36.899 ] 00:27:36.899 } 00:27:36.899 ] 00:27:36.899 } 00:27:36.899 [2024-07-26 05:24:55.951531] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:36.899 [2024-07-26 05:24:55.951669] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89858 ] 00:27:37.158 [2024-07-26 05:24:56.099316] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.158 [2024-07-26 05:24:56.246389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.035  Copying: 65/65 [MB] (average 165 MBps) 00:27:39.035 00:27:39.035 05:24:57 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:27:39.035 05:24:57 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:27:39.035 05:24:57 -- dd/common.sh@31 -- # xtrace_disable 00:27:39.035 05:24:57 -- common/autotest_common.sh@10 -- # set +x 00:27:39.035 { 00:27:39.035 "subsystems": [ 00:27:39.035 { 00:27:39.035 "subsystem": "bdev", 00:27:39.035 "config": [ 00:27:39.035 { 00:27:39.035 "params": { 00:27:39.035 "block_size": 4096, 00:27:39.035 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:27:39.035 "name": "aio1" 00:27:39.035 }, 00:27:39.035 "method": "bdev_aio_create" 00:27:39.035 }, 00:27:39.035 { 00:27:39.035 "params": { 00:27:39.035 "trtype": "pcie", 00:27:39.035 "traddr": "0000:00:06.0", 00:27:39.035 "name": "Nvme0" 00:27:39.035 }, 00:27:39.035 "method": "bdev_nvme_attach_controller" 00:27:39.035 }, 00:27:39.035 { 00:27:39.035 "method": "bdev_wait_for_examine" 00:27:39.035 } 00:27:39.035 ] 00:27:39.035 } 00:27:39.035 ] 00:27:39.035 } 00:27:39.035 [2024-07-26 05:24:57.910607] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:39.035 [2024-07-26 05:24:57.910768] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89885 ] 00:27:39.035 [2024-07-26 05:24:58.080287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.294 [2024-07-26 05:24:58.232277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.490  Copying: 1024/1024 [kB] (average 500 MBps) 00:27:40.490 00:27:40.490 05:24:59 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:27:40.490 05:24:59 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:27:40.490 05:24:59 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:27:40.490 05:24:59 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:27:40.490 05:24:59 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:27:40.490 05:24:59 -- dd/common.sh@31 -- # xtrace_disable 00:27:40.490 05:24:59 -- common/autotest_common.sh@10 -- # set +x 00:27:40.490 { 00:27:40.490 "subsystems": [ 00:27:40.490 { 00:27:40.490 "subsystem": "bdev", 00:27:40.490 "config": [ 00:27:40.490 { 00:27:40.490 "params": { 00:27:40.490 "block_size": 4096, 00:27:40.490 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:27:40.490 "name": "aio1" 00:27:40.490 }, 00:27:40.490 "method": "bdev_aio_create" 00:27:40.490 }, 00:27:40.490 { 00:27:40.490 "params": { 00:27:40.490 "trtype": "pcie", 00:27:40.490 "traddr": "0000:00:06.0", 00:27:40.490 "name": "Nvme0" 00:27:40.490 }, 00:27:40.490 "method": "bdev_nvme_attach_controller" 00:27:40.490 }, 00:27:40.490 { 00:27:40.490 "method": "bdev_wait_for_examine" 00:27:40.490 } 00:27:40.490 ] 00:27:40.490 } 00:27:40.490 ] 00:27:40.490 } 00:27:40.490 [2024-07-26 05:24:59.457545] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:40.490 [2024-07-26 05:24:59.457692] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89910 ] 00:27:40.749 [2024-07-26 05:24:59.626419] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.749 [2024-07-26 05:24:59.776611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.255  Copying: 65/65 [MB] (average 1140 MBps) 00:27:42.255 00:27:42.255 05:25:01 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:27:42.255 05:25:01 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:27:42.255 05:25:01 -- dd/common.sh@31 -- # xtrace_disable 00:27:42.255 05:25:01 -- common/autotest_common.sh@10 -- # set +x 00:27:42.255 { 00:27:42.255 "subsystems": [ 00:27:42.255 { 00:27:42.255 "subsystem": "bdev", 00:27:42.255 "config": [ 00:27:42.255 { 00:27:42.255 "params": { 00:27:42.255 "block_size": 4096, 00:27:42.255 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:27:42.255 "name": "aio1" 00:27:42.255 }, 00:27:42.255 "method": "bdev_aio_create" 00:27:42.255 }, 00:27:42.255 { 00:27:42.255 "params": { 00:27:42.255 "trtype": "pcie", 00:27:42.255 "traddr": "0000:00:06.0", 00:27:42.255 "name": "Nvme0" 00:27:42.255 }, 00:27:42.255 "method": "bdev_nvme_attach_controller" 00:27:42.255 }, 00:27:42.255 { 00:27:42.255 "method": "bdev_wait_for_examine" 00:27:42.255 } 00:27:42.255 ] 00:27:42.255 } 00:27:42.255 ] 00:27:42.255 } 00:27:42.255 [2024-07-26 05:25:01.173228] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:42.255 [2024-07-26 05:25:01.173356] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89938 ] 00:27:42.255 [2024-07-26 05:25:01.325081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.514 [2024-07-26 05:25:01.472895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.740  Copying: 1024/1024 [kB] (average 500 MBps) 00:27:43.740 00:27:43.740 05:25:02 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:27:43.740 05:25:02 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:27:43.740 00:27:43.740 real 0m6.753s 00:27:43.740 user 0m5.078s 00:27:43.740 sys 0m0.897s 00:27:43.740 05:25:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:43.740 05:25:02 -- common/autotest_common.sh@10 -- # set +x 00:27:43.740 ************************************ 00:27:43.740 END TEST dd_offset_magic 00:27:43.740 ************************************ 00:27:43.740 05:25:02 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:27:43.740 05:25:02 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:27:43.740 05:25:02 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:43.740 05:25:02 -- dd/common.sh@11 -- # local nvme_ref= 00:27:43.740 05:25:02 -- dd/common.sh@12 -- # local size=4194330 00:27:43.740 05:25:02 -- dd/common.sh@14 -- # local bs=1048576 00:27:43.740 05:25:02 -- dd/common.sh@15 -- # local count=5 00:27:43.740 05:25:02 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:27:43.740 05:25:02 -- dd/common.sh@18 -- # gen_conf 00:27:43.740 05:25:02 -- dd/common.sh@31 -- # xtrace_disable 00:27:43.740 05:25:02 -- common/autotest_common.sh@10 -- # set +x 00:27:43.740 { 00:27:43.740 "subsystems": [ 00:27:43.740 { 00:27:43.740 "subsystem": "bdev", 00:27:43.740 "config": [ 00:27:43.740 { 00:27:43.740 "params": { 00:27:43.740 "block_size": 4096, 00:27:43.740 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:27:43.740 "name": "aio1" 00:27:43.740 }, 00:27:43.740 "method": "bdev_aio_create" 00:27:43.740 }, 00:27:43.740 { 00:27:43.740 "params": { 00:27:43.740 "trtype": "pcie", 00:27:43.740 "traddr": "0000:00:06.0", 00:27:43.740 "name": "Nvme0" 00:27:43.740 }, 00:27:43.740 "method": "bdev_nvme_attach_controller" 00:27:43.740 }, 00:27:43.740 { 00:27:43.740 "method": "bdev_wait_for_examine" 00:27:43.740 } 00:27:43.740 ] 00:27:43.740 } 00:27:43.740 ] 00:27:43.740 } 00:27:43.740 [2024-07-26 05:25:02.760818] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:43.740 [2024-07-26 05:25:02.760973] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89982 ] 00:27:43.997 [2024-07-26 05:25:02.927368] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.997 [2024-07-26 05:25:03.090304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.500  Copying: 5120/5120 [kB] (average 1250 MBps) 00:27:45.500 00:27:45.500 05:25:04 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:27:45.500 05:25:04 -- dd/common.sh@10 -- # local bdev=aio1 00:27:45.500 05:25:04 -- dd/common.sh@11 -- # local nvme_ref= 00:27:45.500 05:25:04 -- dd/common.sh@12 -- # local size=4194330 00:27:45.500 05:25:04 -- dd/common.sh@14 -- # local bs=1048576 00:27:45.500 05:25:04 -- dd/common.sh@15 -- # local count=5 00:27:45.500 05:25:04 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:27:45.500 05:25:04 -- dd/common.sh@18 -- # gen_conf 00:27:45.500 05:25:04 -- dd/common.sh@31 -- # xtrace_disable 00:27:45.500 05:25:04 -- common/autotest_common.sh@10 -- # set +x 00:27:45.500 { 00:27:45.500 "subsystems": [ 00:27:45.500 { 00:27:45.500 "subsystem": "bdev", 00:27:45.500 "config": [ 00:27:45.500 { 00:27:45.500 "params": { 00:27:45.500 "block_size": 4096, 00:27:45.500 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:27:45.500 "name": "aio1" 00:27:45.500 }, 00:27:45.500 "method": "bdev_aio_create" 00:27:45.500 }, 00:27:45.500 { 00:27:45.500 "params": { 00:27:45.500 "trtype": "pcie", 00:27:45.500 "traddr": "0000:00:06.0", 00:27:45.500 "name": "Nvme0" 00:27:45.500 }, 00:27:45.500 "method": "bdev_nvme_attach_controller" 00:27:45.500 }, 00:27:45.500 { 00:27:45.500 "method": "bdev_wait_for_examine" 00:27:45.500 } 00:27:45.500 ] 00:27:45.500 } 00:27:45.500 ] 00:27:45.500 } 00:27:45.500 [2024-07-26 05:25:04.388454] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:45.500 [2024-07-26 05:25:04.388610] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90007 ] 00:27:45.500 [2024-07-26 05:25:04.552864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.758 [2024-07-26 05:25:04.705837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.954  Copying: 5120/5120 [kB] (average 1666 MBps) 00:27:46.954 00:27:46.954 05:25:05 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:27:46.954 00:27:46.954 real 0m16.532s 00:27:46.954 user 0m12.893s 00:27:46.954 sys 0m2.279s 00:27:46.954 05:25:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:46.954 05:25:05 -- common/autotest_common.sh@10 -- # set +x 00:27:46.954 ************************************ 00:27:46.954 END TEST spdk_dd_bdev_to_bdev 00:27:46.954 ************************************ 00:27:46.954 05:25:05 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:27:46.954 05:25:05 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:27:46.954 05:25:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:46.954 05:25:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:46.954 05:25:05 -- common/autotest_common.sh@10 -- # set +x 00:27:46.954 ************************************ 00:27:46.954 START TEST spdk_dd_sparse 00:27:46.954 ************************************ 00:27:46.954 05:25:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:27:46.954 * Looking for test storage... 
00:27:46.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:46.954 05:25:06 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:46.954 05:25:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:46.954 05:25:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:46.954 05:25:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:46.954 05:25:06 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:46.954 05:25:06 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:46.954 05:25:06 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:46.954 05:25:06 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:46.954 05:25:06 -- paths/export.sh@6 -- # export PATH 00:27:46.954 05:25:06 -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:47.212 05:25:06 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:27:47.212 05:25:06 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:27:47.212 05:25:06 -- dd/sparse.sh@110 -- # file1=file_zero1 00:27:47.212 05:25:06 -- dd/sparse.sh@111 -- # file2=file_zero2 00:27:47.212 05:25:06 -- dd/sparse.sh@112 -- # file3=file_zero3 00:27:47.212 05:25:06 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:27:47.212 05:25:06 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:27:47.212 05:25:06 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:27:47.212 05:25:06 -- dd/sparse.sh@118 -- # prepare 00:27:47.212 05:25:06 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:27:47.212 05:25:06 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:27:47.212 1+0 records in 00:27:47.212 1+0 records out 00:27:47.212 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00672904 s, 623 MB/s 00:27:47.212 05:25:06 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:27:47.212 1+0 records in 00:27:47.212 1+0 records out 00:27:47.212 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00809553 s, 518 MB/s 00:27:47.212 05:25:06 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:27:47.212 1+0 records in 00:27:47.212 1+0 records out 00:27:47.212 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.003914 s, 1.1 GB/s 00:27:47.212 05:25:06 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:27:47.212 05:25:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:47.212 05:25:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:47.212 05:25:06 -- common/autotest_common.sh@10 -- # set +x 00:27:47.212 ************************************ 00:27:47.212 START TEST dd_sparse_file_to_file 00:27:47.212 ************************************ 00:27:47.212 05:25:06 -- common/autotest_common.sh@1104 -- # file_to_file 00:27:47.212 05:25:06 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:27:47.212 05:25:06 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:27:47.212 05:25:06 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:27:47.212 05:25:06 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:27:47.212 05:25:06 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:27:47.212 05:25:06 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:27:47.212 05:25:06 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:27:47.212 05:25:06 -- dd/sparse.sh@41 -- # gen_conf 00:27:47.212 05:25:06 -- dd/common.sh@31 -- # xtrace_disable 00:27:47.212 05:25:06 -- common/autotest_common.sh@10 -- # set +x 00:27:47.212 { 00:27:47.212 
"subsystems": [ 00:27:47.212 { 00:27:47.212 "subsystem": "bdev", 00:27:47.212 "config": [ 00:27:47.212 { 00:27:47.212 "params": { 00:27:47.212 "block_size": 4096, 00:27:47.213 "filename": "dd_sparse_aio_disk", 00:27:47.213 "name": "dd_aio" 00:27:47.213 }, 00:27:47.213 "method": "bdev_aio_create" 00:27:47.213 }, 00:27:47.213 { 00:27:47.213 "params": { 00:27:47.213 "lvs_name": "dd_lvstore", 00:27:47.213 "bdev_name": "dd_aio" 00:27:47.213 }, 00:27:47.213 "method": "bdev_lvol_create_lvstore" 00:27:47.213 }, 00:27:47.213 { 00:27:47.213 "method": "bdev_wait_for_examine" 00:27:47.213 } 00:27:47.213 ] 00:27:47.213 } 00:27:47.213 ] 00:27:47.213 } 00:27:47.213 [2024-07-26 05:25:06.166586] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:47.213 [2024-07-26 05:25:06.166737] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90082 ] 00:27:47.471 [2024-07-26 05:25:06.336291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.471 [2024-07-26 05:25:06.482424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.666  Copying: 12/36 [MB] (average 1333 MBps) 00:27:48.666 00:27:48.666 05:25:07 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:27:48.666 05:25:07 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:27:48.666 05:25:07 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:27:48.666 05:25:07 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:27:48.666 05:25:07 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:27:48.666 05:25:07 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:27:48.666 05:25:07 -- dd/sparse.sh@52 -- # stat1_b=24576 00:27:48.666 05:25:07 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:27:48.666 05:25:07 -- dd/sparse.sh@53 -- # stat2_b=24576 00:27:48.666 05:25:07 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:27:48.666 00:27:48.666 real 0m1.660s 00:27:48.666 user 0m1.303s 00:27:48.666 sys 0m0.228s 00:27:48.666 05:25:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:48.666 05:25:07 -- common/autotest_common.sh@10 -- # set +x 00:27:48.666 ************************************ 00:27:48.666 END TEST dd_sparse_file_to_file 00:27:48.666 ************************************ 00:27:48.926 05:25:07 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:27:48.926 05:25:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:48.926 05:25:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:48.926 05:25:07 -- common/autotest_common.sh@10 -- # set +x 00:27:48.926 ************************************ 00:27:48.926 START TEST dd_sparse_file_to_bdev 00:27:48.926 ************************************ 00:27:48.926 05:25:07 -- common/autotest_common.sh@1104 -- # file_to_bdev 00:27:48.926 05:25:07 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:27:48.926 05:25:07 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:27:48.926 05:25:07 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:27:48.926 05:25:07 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:27:48.926 05:25:07 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol 
--bs=12582912 --sparse --json /dev/fd/62 00:27:48.926 05:25:07 -- dd/sparse.sh@73 -- # gen_conf 00:27:48.926 05:25:07 -- dd/common.sh@31 -- # xtrace_disable 00:27:48.926 05:25:07 -- common/autotest_common.sh@10 -- # set +x 00:27:48.926 { 00:27:48.926 "subsystems": [ 00:27:48.926 { 00:27:48.926 "subsystem": "bdev", 00:27:48.926 "config": [ 00:27:48.926 { 00:27:48.926 "params": { 00:27:48.926 "block_size": 4096, 00:27:48.926 "filename": "dd_sparse_aio_disk", 00:27:48.926 "name": "dd_aio" 00:27:48.926 }, 00:27:48.926 "method": "bdev_aio_create" 00:27:48.926 }, 00:27:48.926 { 00:27:48.926 "params": { 00:27:48.926 "lvs_name": "dd_lvstore", 00:27:48.926 "lvol_name": "dd_lvol", 00:27:48.926 "size": 37748736, 00:27:48.926 "thin_provision": true 00:27:48.926 }, 00:27:48.926 "method": "bdev_lvol_create" 00:27:48.926 }, 00:27:48.926 { 00:27:48.926 "method": "bdev_wait_for_examine" 00:27:48.926 } 00:27:48.926 ] 00:27:48.926 } 00:27:48.926 ] 00:27:48.926 } 00:27:48.926 [2024-07-26 05:25:07.873145] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:48.926 [2024-07-26 05:25:07.873303] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90134 ] 00:27:49.185 [2024-07-26 05:25:08.042699] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.185 [2024-07-26 05:25:08.194895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.444 [2024-07-26 05:25:08.415798] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:27:49.444  Copying: 12/36 [MB] (average 521 MBps)[2024-07-26 05:25:08.468142] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:27:50.381 00:27:50.381 00:27:50.381 00:27:50.381 real 0m1.642s 00:27:50.381 user 0m1.319s 00:27:50.381 sys 0m0.216s 00:27:50.381 05:25:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:50.381 05:25:09 -- common/autotest_common.sh@10 -- # set +x 00:27:50.381 ************************************ 00:27:50.381 END TEST dd_sparse_file_to_bdev 00:27:50.381 ************************************ 00:27:50.639 05:25:09 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:27:50.639 05:25:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:50.639 05:25:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:50.639 05:25:09 -- common/autotest_common.sh@10 -- # set +x 00:27:50.639 ************************************ 00:27:50.639 START TEST dd_sparse_bdev_to_file 00:27:50.639 ************************************ 00:27:50.639 05:25:09 -- common/autotest_common.sh@1104 -- # bdev_to_file 00:27:50.639 05:25:09 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:27:50.639 05:25:09 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:27:50.639 05:25:09 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:27:50.639 05:25:09 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:27:50.639 05:25:09 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:27:50.639 05:25:09 -- dd/sparse.sh@91 -- # gen_conf 
00:27:50.639 05:25:09 -- dd/common.sh@31 -- # xtrace_disable 00:27:50.639 05:25:09 -- common/autotest_common.sh@10 -- # set +x 00:27:50.639 { 00:27:50.639 "subsystems": [ 00:27:50.639 { 00:27:50.639 "subsystem": "bdev", 00:27:50.639 "config": [ 00:27:50.639 { 00:27:50.639 "params": { 00:27:50.639 "block_size": 4096, 00:27:50.639 "filename": "dd_sparse_aio_disk", 00:27:50.639 "name": "dd_aio" 00:27:50.639 }, 00:27:50.639 "method": "bdev_aio_create" 00:27:50.639 }, 00:27:50.639 { 00:27:50.639 "method": "bdev_wait_for_examine" 00:27:50.639 } 00:27:50.639 ] 00:27:50.639 } 00:27:50.639 ] 00:27:50.639 } 00:27:50.639 [2024-07-26 05:25:09.564086] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:50.639 [2024-07-26 05:25:09.564240] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90177 ] 00:27:50.639 [2024-07-26 05:25:09.733071] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.898 [2024-07-26 05:25:09.888992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.094  Copying: 12/36 [MB] (average 1500 MBps) 00:27:52.094 00:27:52.094 05:25:11 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:27:52.094 05:25:11 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:27:52.094 05:25:11 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:27:52.094 05:25:11 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:27:52.094 05:25:11 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:27:52.094 05:25:11 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:27:52.094 05:25:11 -- dd/sparse.sh@102 -- # stat2_b=24576 00:27:52.094 05:25:11 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:27:52.094 05:25:11 -- dd/sparse.sh@103 -- # stat3_b=24576 00:27:52.094 05:25:11 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:27:52.094 00:27:52.094 real 0m1.651s 00:27:52.094 user 0m1.295s 00:27:52.094 sys 0m0.245s 00:27:52.094 05:25:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:52.094 05:25:11 -- common/autotest_common.sh@10 -- # set +x 00:27:52.094 ************************************ 00:27:52.094 END TEST dd_sparse_bdev_to_file 00:27:52.094 ************************************ 00:27:52.094 05:25:11 -- dd/sparse.sh@1 -- # cleanup 00:27:52.094 05:25:11 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:27:52.095 05:25:11 -- dd/sparse.sh@12 -- # rm file_zero1 00:27:52.354 05:25:11 -- dd/sparse.sh@13 -- # rm file_zero2 00:27:52.354 05:25:11 -- dd/sparse.sh@14 -- # rm file_zero3 00:27:52.354 ************************************ 00:27:52.354 END TEST spdk_dd_sparse 00:27:52.354 ************************************ 00:27:52.354 00:27:52.354 real 0m5.240s 00:27:52.354 user 0m4.006s 00:27:52.354 sys 0m0.884s 00:27:52.354 05:25:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:52.354 05:25:11 -- common/autotest_common.sh@10 -- # set +x 00:27:52.354 05:25:11 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:27:52.354 05:25:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:52.354 05:25:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:52.354 05:25:11 -- common/autotest_common.sh@10 -- # set +x 00:27:52.354 ************************************ 00:27:52.354 START TEST spdk_dd_negative 00:27:52.354 ************************************ 00:27:52.354 05:25:11 -- 
common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:27:52.354 * Looking for test storage... 00:27:52.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:52.354 05:25:11 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:52.354 05:25:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:52.354 05:25:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:52.354 05:25:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:52.354 05:25:11 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:52.354 05:25:11 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:52.354 05:25:11 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:52.354 05:25:11 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:52.354 05:25:11 -- paths/export.sh@6 -- # export PATH 00:27:52.354 
05:25:11 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:52.354 05:25:11 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:52.354 05:25:11 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:52.354 05:25:11 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:52.354 05:25:11 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:52.354 05:25:11 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:27:52.354 05:25:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:52.354 05:25:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:52.354 05:25:11 -- common/autotest_common.sh@10 -- # set +x 00:27:52.354 ************************************ 00:27:52.354 START TEST dd_invalid_arguments 00:27:52.354 ************************************ 00:27:52.354 05:25:11 -- common/autotest_common.sh@1104 -- # invalid_arguments 00:27:52.354 05:25:11 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:27:52.354 05:25:11 -- common/autotest_common.sh@640 -- # local es=0 00:27:52.354 05:25:11 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:27:52.354 05:25:11 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:52.354 05:25:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:52.354 05:25:11 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:52.354 05:25:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:52.354 05:25:11 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:52.354 05:25:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:52.354 05:25:11 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:52.354 05:25:11 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:52.354 05:25:11 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:27:52.354 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:27:52.354 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:27:52.354 options: 00:27:52.354 -c, --config JSON config file (default none) 00:27:52.354 --json JSON config file (default none) 00:27:52.354 --json-ignore-init-errors 00:27:52.354 don't exit on invalid config entry 00:27:52.354 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:27:52.354 -g, --single-file-segments 00:27:52.354 force creating just one hugetlbfs file 
00:27:52.354 -h, --help show this usage 00:27:52.354 -i, --shm-id shared memory ID (optional) 00:27:52.354 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:27:52.354 --lcores lcore to CPU mapping list. The list is in the format: 00:27:52.354 [<,lcores[@CPUs]>...] 00:27:52.354 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:27:52.354 Within the group, '-' is used for range separator, 00:27:52.354 ',' is used for single number separator. 00:27:52.354 '( )' can be omitted for single element group, 00:27:52.355 '@' can be omitted if cpus and lcores have the same value 00:27:52.355 -n, --mem-channels channel number of memory channels used for DPDK 00:27:52.355 -p, --main-core main (primary) core for DPDK 00:27:52.355 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:27:52.355 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:27:52.355 --disable-cpumask-locks Disable CPU core lock files. 00:27:52.355 --silence-noticelog disable notice level logging to stderr 00:27:52.355 --msg-mempool-size global message memory pool size in count (default: 262143) 00:27:52.355 -u, --no-pci disable PCI access 00:27:52.355 --wait-for-rpc wait for RPCs to initialize subsystems 00:27:52.355 --max-delay maximum reactor delay (in microseconds) 00:27:52.355 -B, --pci-blocked pci addr to block (can be used more than once) 00:27:52.355 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:27:52.355 -R, --huge-unlink unlink huge files after initialization 00:27:52.355 -v, --version print SPDK version 00:27:52.355 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:27:52.355 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:27:52.355 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:27:52.355 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:27:52.355 Tracepoints vary in size and can use more than one trace entry. 00:27:52.355 --rpcs-allowed comma-separated list of permitted RPCS 00:27:52.355 --env-context Opaque context for use of the env implementation 00:27:52.355 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:27:52.355 --no-huge run without using hugepages 00:27:52.355 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:27:52.355 -e, --tpoint-group [:] 00:27:52.355 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:27:52.355 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 
00:27:52.355 Groups and [2024-07-26 05:25:11.427383] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:27:52.614 masks can be combined (e.g. thread,bdev:0x1). 00:27:52.614 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:27:52.614 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:27:52.614 [--------- DD Options ---------] 00:27:52.614 --if Input file. Must specify either --if or --ib. 00:27:52.614 --ib Input bdev. Must specifier either --if or --ib 00:27:52.614 --of Output file. Must specify either --of or --ob. 00:27:52.614 --ob Output bdev. Must specify either --of or --ob. 00:27:52.614 --iflag Input file flags. 00:27:52.614 --oflag Output file flags. 00:27:52.614 --bs I/O unit size (default: 4096) 00:27:52.614 --qd Queue depth (default: 2) 00:27:52.614 --count I/O unit count. The number of I/O units to copy. (default: all) 00:27:52.614 --skip Skip this many I/O units at start of input. (default: 0) 00:27:52.614 --seek Skip this many I/O units at start of output. (default: 0) 00:27:52.614 --aio Force usage of AIO. (by default io_uring is used if available) 00:27:52.614 --sparse Enable hole skipping in input target 00:27:52.614 Available iflag and oflag values: 00:27:52.614 append - append mode 00:27:52.614 direct - use direct I/O for data 00:27:52.614 directory - fail unless a directory 00:27:52.614 dsync - use synchronized I/O for data 00:27:52.614 noatime - do not update access time 00:27:52.614 noctty - do not assign controlling terminal from file 00:27:52.614 nofollow - do not follow symlinks 00:27:52.614 nonblock - use non-blocking I/O 00:27:52.614 sync - use synchronized I/O for data and metadata 00:27:52.614 05:25:11 -- common/autotest_common.sh@643 -- # es=2 00:27:52.614 05:25:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:52.614 05:25:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:52.614 05:25:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:52.614 00:27:52.614 real 0m0.117s 00:27:52.614 user 0m0.067s 00:27:52.614 sys 0m0.050s 00:27:52.614 05:25:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:52.614 05:25:11 -- common/autotest_common.sh@10 -- # set +x 00:27:52.614 ************************************ 00:27:52.614 END TEST dd_invalid_arguments 00:27:52.614 ************************************ 00:27:52.614 05:25:11 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:27:52.614 05:25:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:52.614 05:25:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:52.614 05:25:11 -- common/autotest_common.sh@10 -- # set +x 00:27:52.614 ************************************ 00:27:52.614 START TEST dd_double_input 00:27:52.614 ************************************ 00:27:52.614 05:25:11 -- common/autotest_common.sh@1104 -- # double_input 00:27:52.614 05:25:11 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:27:52.614 05:25:11 -- common/autotest_common.sh@640 -- # local es=0 00:27:52.614 05:25:11 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:27:52.614 05:25:11 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:52.614 05:25:11 -- common/autotest_common.sh@632 -- # case "$(type -t 
"$arg")" in 00:27:52.614 05:25:11 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:52.614 05:25:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:52.614 05:25:11 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:52.614 05:25:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:52.614 05:25:11 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:52.614 05:25:11 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:52.614 05:25:11 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:27:52.614 [2024-07-26 05:25:11.589209] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 00:27:52.614 05:25:11 -- common/autotest_common.sh@643 -- # es=22 00:27:52.614 05:25:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:52.614 05:25:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:52.615 05:25:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:52.615 00:27:52.615 real 0m0.112s 00:27:52.615 user 0m0.063s 00:27:52.615 sys 0m0.050s 00:27:52.615 05:25:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:52.615 05:25:11 -- common/autotest_common.sh@10 -- # set +x 00:27:52.615 ************************************ 00:27:52.615 END TEST dd_double_input 00:27:52.615 ************************************ 00:27:52.615 05:25:11 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:27:52.615 05:25:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:52.615 05:25:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:52.615 05:25:11 -- common/autotest_common.sh@10 -- # set +x 00:27:52.615 ************************************ 00:27:52.615 START TEST dd_double_output 00:27:52.615 ************************************ 00:27:52.615 05:25:11 -- common/autotest_common.sh@1104 -- # double_output 00:27:52.615 05:25:11 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:27:52.615 05:25:11 -- common/autotest_common.sh@640 -- # local es=0 00:27:52.615 05:25:11 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:27:52.615 05:25:11 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:52.615 05:25:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:52.615 05:25:11 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:52.615 05:25:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:52.615 05:25:11 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:52.615 05:25:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:52.615 05:25:11 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:52.615 05:25:11 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:52.615 05:25:11 -- common/autotest_common.sh@643 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:27:52.874 [2024-07-26 05:25:11.748450] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 00:27:52.874 05:25:11 -- common/autotest_common.sh@643 -- # es=22 00:27:52.874 05:25:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:52.874 05:25:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:52.874 05:25:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:52.874 00:27:52.874 real 0m0.114s 00:27:52.874 user 0m0.066s 00:27:52.874 sys 0m0.049s 00:27:52.874 05:25:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:52.874 05:25:11 -- common/autotest_common.sh@10 -- # set +x 00:27:52.874 ************************************ 00:27:52.874 END TEST dd_double_output 00:27:52.874 ************************************ 00:27:52.874 05:25:11 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:27:52.874 05:25:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:52.874 05:25:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:52.874 05:25:11 -- common/autotest_common.sh@10 -- # set +x 00:27:52.874 ************************************ 00:27:52.874 START TEST dd_no_input 00:27:52.874 ************************************ 00:27:52.874 05:25:11 -- common/autotest_common.sh@1104 -- # no_input 00:27:52.874 05:25:11 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:27:52.874 05:25:11 -- common/autotest_common.sh@640 -- # local es=0 00:27:52.874 05:25:11 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:27:52.874 05:25:11 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:52.874 05:25:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:52.874 05:25:11 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:52.874 05:25:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:52.874 05:25:11 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:52.874 05:25:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:52.874 05:25:11 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:52.874 05:25:11 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:52.874 05:25:11 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:27:52.874 [2024-07-26 05:25:11.907463] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:27:52.874 05:25:11 -- common/autotest_common.sh@643 -- # es=22 00:27:52.874 05:25:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:52.874 05:25:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:52.874 05:25:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:52.874 00:27:52.874 real 0m0.113s 00:27:52.874 user 0m0.070s 00:27:52.874 sys 0m0.043s 00:27:52.874 05:25:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:52.874 05:25:11 -- common/autotest_common.sh@10 -- # set +x 00:27:52.874 ************************************ 00:27:52.874 END TEST dd_no_input 00:27:52.874 ************************************ 00:27:53.134 05:25:11 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 
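For reference, the argument-validation cases traced above can be replayed by hand; each invocation below is expected to exit non-zero with the *ERROR* line recorded in the log (a sketch only, reusing the dump files the suite already touches):

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
D0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
D1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
touch "$D0" "$D1"
$DD --ii= --ob=                 && echo "unexpected: bogus --ii= accepted"    # Invalid arguments
$DD --if="$D0" --ib= --ob=      && echo "unexpected: --if and --ib accepted"  # either --if or --ib, not both
$DD --if="$D0" --of="$D1" --ob= && echo "unexpected: --of and --ob accepted"  # either --of or --ob, not both
$DD --ob=                       && echo "unexpected: missing input accepted"  # must specify --if or --ib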
00:27:53.134 05:25:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:53.134 05:25:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:53.134 05:25:11 -- common/autotest_common.sh@10 -- # set +x 00:27:53.134 ************************************ 00:27:53.134 START TEST dd_no_output 00:27:53.134 ************************************ 00:27:53.134 05:25:12 -- common/autotest_common.sh@1104 -- # no_output 00:27:53.134 05:25:12 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:53.134 05:25:12 -- common/autotest_common.sh@640 -- # local es=0 00:27:53.134 05:25:12 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:53.134 05:25:12 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:53.134 05:25:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:53.134 05:25:12 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:53.134 05:25:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:53.134 05:25:12 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:53.134 05:25:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:53.134 05:25:12 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:53.134 05:25:12 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:53.134 05:25:12 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:53.134 [2024-07-26 05:25:12.070131] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:27:53.134 05:25:12 -- common/autotest_common.sh@643 -- # es=22 00:27:53.134 05:25:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:53.134 05:25:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:53.134 05:25:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:53.134 00:27:53.134 real 0m0.116s 00:27:53.134 user 0m0.068s 00:27:53.134 sys 0m0.049s 00:27:53.134 05:25:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:53.134 05:25:12 -- common/autotest_common.sh@10 -- # set +x 00:27:53.134 ************************************ 00:27:53.134 END TEST dd_no_output 00:27:53.134 ************************************ 00:27:53.134 05:25:12 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:27:53.134 05:25:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:53.134 05:25:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:53.134 05:25:12 -- common/autotest_common.sh@10 -- # set +x 00:27:53.134 ************************************ 00:27:53.134 START TEST dd_wrong_blocksize 00:27:53.134 ************************************ 00:27:53.134 05:25:12 -- common/autotest_common.sh@1104 -- # wrong_blocksize 00:27:53.134 05:25:12 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:27:53.134 05:25:12 -- common/autotest_common.sh@640 -- # local es=0 00:27:53.134 05:25:12 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:27:53.134 05:25:12 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:53.134 05:25:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:53.134 05:25:12 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:53.134 05:25:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:53.134 05:25:12 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:53.134 05:25:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:53.134 05:25:12 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:53.134 05:25:12 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:53.134 05:25:12 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:27:53.134 [2024-07-26 05:25:12.227824] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:27:53.394 05:25:12 -- common/autotest_common.sh@643 -- # es=22 00:27:53.394 05:25:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:53.394 05:25:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:53.394 05:25:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:53.394 00:27:53.394 real 0m0.093s 00:27:53.394 user 0m0.047s 00:27:53.394 sys 0m0.046s 00:27:53.394 05:25:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:53.394 ************************************ 00:27:53.394 END TEST dd_wrong_blocksize 00:27:53.394 ************************************ 00:27:53.394 05:25:12 -- common/autotest_common.sh@10 -- # set +x 00:27:53.394 05:25:12 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:27:53.394 05:25:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:53.394 05:25:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:53.394 05:25:12 -- common/autotest_common.sh@10 -- # set +x 00:27:53.394 ************************************ 00:27:53.394 START TEST dd_smaller_blocksize 00:27:53.394 ************************************ 00:27:53.394 05:25:12 -- common/autotest_common.sh@1104 -- # smaller_blocksize 00:27:53.394 05:25:12 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:27:53.394 05:25:12 -- common/autotest_common.sh@640 -- # local es=0 00:27:53.394 05:25:12 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:27:53.394 05:25:12 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:53.394 05:25:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:53.394 05:25:12 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:53.394 05:25:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:53.394 05:25:12 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:53.394 05:25:12 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:53.394 05:25:12 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:53.394 05:25:12 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:53.394 05:25:12 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:27:53.394 [2024-07-26 05:25:12.373042] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:53.394 [2024-07-26 05:25:12.373182] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90411 ] 00:27:53.653 [2024-07-26 05:25:12.525872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.653 [2024-07-26 05:25:12.674656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.221 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:27:54.221 [2024-07-26 05:25:13.139249] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:27:54.221 [2024-07-26 05:25:13.139306] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:54.792 [2024-07-26 05:25:13.689447] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:55.052 05:25:14 -- common/autotest_common.sh@643 -- # es=244 00:27:55.052 05:25:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:55.052 05:25:14 -- common/autotest_common.sh@652 -- # es=116 00:27:55.052 05:25:14 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:55.052 05:25:14 -- common/autotest_common.sh@660 -- # es=1 00:27:55.052 05:25:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:55.052 00:27:55.052 real 0m1.708s 00:27:55.052 user 0m1.247s 00:27:55.052 sys 0m0.360s 00:27:55.052 05:25:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:55.052 05:25:14 -- common/autotest_common.sh@10 -- # set +x 00:27:55.052 ************************************ 00:27:55.052 END TEST dd_smaller_blocksize 00:27:55.052 ************************************ 00:27:55.052 05:25:14 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:27:55.052 05:25:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:55.052 05:25:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:55.052 05:25:14 -- common/autotest_common.sh@10 -- # set +x 00:27:55.052 ************************************ 00:27:55.052 START TEST dd_invalid_count 00:27:55.052 ************************************ 00:27:55.052 05:25:14 -- common/autotest_common.sh@1104 -- # invalid_count 00:27:55.052 05:25:14 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:27:55.052 05:25:14 -- common/autotest_common.sh@640 -- # local es=0 00:27:55.052 05:25:14 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:27:55.052 05:25:14 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:55.052 05:25:14 
-- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:55.052 05:25:14 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:55.052 05:25:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:55.052 05:25:14 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:55.052 05:25:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:55.052 05:25:14 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:55.052 05:25:14 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:55.052 05:25:14 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:27:55.052 [2024-07-26 05:25:14.141819] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:27:55.311 05:25:14 -- common/autotest_common.sh@643 -- # es=22 00:27:55.311 05:25:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:55.311 05:25:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:55.311 05:25:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:55.311 00:27:55.311 real 0m0.113s 00:27:55.311 user 0m0.054s 00:27:55.311 sys 0m0.059s 00:27:55.311 05:25:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:55.311 ************************************ 00:27:55.311 END TEST dd_invalid_count 00:27:55.311 ************************************ 00:27:55.311 05:25:14 -- common/autotest_common.sh@10 -- # set +x 00:27:55.311 05:25:14 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:27:55.311 05:25:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:55.311 05:25:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:55.311 05:25:14 -- common/autotest_common.sh@10 -- # set +x 00:27:55.311 ************************************ 00:27:55.311 START TEST dd_invalid_oflag 00:27:55.311 ************************************ 00:27:55.311 05:25:14 -- common/autotest_common.sh@1104 -- # invalid_oflag 00:27:55.311 05:25:14 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:27:55.311 05:25:14 -- common/autotest_common.sh@640 -- # local es=0 00:27:55.311 05:25:14 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:27:55.311 05:25:14 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:55.311 05:25:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:55.311 05:25:14 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:55.311 05:25:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:55.311 05:25:14 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:55.311 05:25:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:55.311 05:25:14 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:55.311 05:25:14 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:55.311 05:25:14 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:27:55.311 [2024-07-26 05:25:14.304108] 
spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:27:55.311 ************************************ 00:27:55.311 END TEST dd_invalid_oflag 00:27:55.311 ************************************ 00:27:55.311 05:25:14 -- common/autotest_common.sh@643 -- # es=22 00:27:55.311 05:25:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:55.311 05:25:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:55.311 05:25:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:55.311 00:27:55.311 real 0m0.112s 00:27:55.311 user 0m0.060s 00:27:55.311 sys 0m0.052s 00:27:55.311 05:25:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:55.311 05:25:14 -- common/autotest_common.sh@10 -- # set +x 00:27:55.311 05:25:14 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:27:55.311 05:25:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:55.311 05:25:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:55.311 05:25:14 -- common/autotest_common.sh@10 -- # set +x 00:27:55.311 ************************************ 00:27:55.311 START TEST dd_invalid_iflag 00:27:55.311 ************************************ 00:27:55.311 05:25:14 -- common/autotest_common.sh@1104 -- # invalid_iflag 00:27:55.311 05:25:14 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:27:55.311 05:25:14 -- common/autotest_common.sh@640 -- # local es=0 00:27:55.311 05:25:14 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:27:55.311 05:25:14 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:55.311 05:25:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:55.311 05:25:14 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:55.311 05:25:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:55.311 05:25:14 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:55.311 05:25:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:55.311 05:25:14 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:55.311 05:25:14 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:55.311 05:25:14 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:27:55.569 [2024-07-26 05:25:14.468598] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:27:55.570 ************************************ 00:27:55.570 END TEST dd_invalid_iflag 00:27:55.570 ************************************ 00:27:55.570 05:25:14 -- common/autotest_common.sh@643 -- # es=22 00:27:55.570 05:25:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:55.570 05:25:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:55.570 05:25:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:55.570 00:27:55.570 real 0m0.113s 00:27:55.570 user 0m0.065s 00:27:55.570 sys 0m0.048s 00:27:55.570 05:25:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:55.570 05:25:14 -- common/autotest_common.sh@10 -- # set +x 00:27:55.570 05:25:14 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:27:55.570 05:25:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:55.570 05:25:14 -- common/autotest_common.sh@1083 
-- # xtrace_disable 00:27:55.570 05:25:14 -- common/autotest_common.sh@10 -- # set +x 00:27:55.570 ************************************ 00:27:55.570 START TEST dd_unknown_flag 00:27:55.570 ************************************ 00:27:55.570 05:25:14 -- common/autotest_common.sh@1104 -- # unknown_flag 00:27:55.570 05:25:14 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:27:55.570 05:25:14 -- common/autotest_common.sh@640 -- # local es=0 00:27:55.570 05:25:14 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:27:55.570 05:25:14 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:55.570 05:25:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:55.570 05:25:14 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:55.570 05:25:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:55.570 05:25:14 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:55.570 05:25:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:55.570 05:25:14 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:55.570 05:25:14 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:55.570 05:25:14 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:27:55.570 [2024-07-26 05:25:14.615184] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:55.570 [2024-07-26 05:25:14.615292] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90518 ] 00:27:55.828 [2024-07-26 05:25:14.765103] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.828 [2024-07-26 05:25:14.918861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.086 [2024-07-26 05:25:15.130928] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:27:56.086 [2024-07-26 05:25:15.130989] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:27:56.086 [2024-07-26 05:25:15.131016] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:27:56.086 [2024-07-26 05:25:15.131035] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:56.653 [2024-07-26 05:25:15.678189] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:57.219 05:25:16 -- common/autotest_common.sh@643 -- # es=236 00:27:57.219 05:25:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:57.219 05:25:16 -- common/autotest_common.sh@652 -- # es=108 00:27:57.219 05:25:16 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:57.219 05:25:16 -- common/autotest_common.sh@660 -- # es=1 00:27:57.219 05:25:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:57.219 00:27:57.219 real 0m1.461s 00:27:57.219 user 0m1.196s 00:27:57.219 sys 0m0.164s 00:27:57.219 05:25:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:57.219 05:25:16 -- common/autotest_common.sh@10 -- # set +x 00:27:57.219 ************************************ 00:27:57.219 END TEST dd_unknown_flag 00:27:57.219 ************************************ 00:27:57.219 05:25:16 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:27:57.219 05:25:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:57.219 05:25:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:57.219 05:25:16 -- common/autotest_common.sh@10 -- # set +x 00:27:57.219 ************************************ 00:27:57.219 START TEST dd_invalid_json 00:27:57.219 ************************************ 00:27:57.219 05:25:16 -- common/autotest_common.sh@1104 -- # invalid_json 00:27:57.219 05:25:16 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:27:57.219 05:25:16 -- dd/negative_dd.sh@95 -- # : 00:27:57.219 05:25:16 -- common/autotest_common.sh@640 -- # local es=0 00:27:57.219 05:25:16 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:27:57.219 05:25:16 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:57.219 05:25:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:57.220 05:25:16 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:57.220 05:25:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:57.220 05:25:16 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
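The unknown-flag case that just finished hinges on the flag name list printed in the usage dump earlier (append, direct, directory, dsync, noatime, noctty, nofollow, nonblock, sync); anything else, such as -1, is rejected with "Unknown file flag". A small sketch of the accepted and rejected shapes, assuming the same dump files:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
D0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
D1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
$DD --if="$D0" --of="$D1" --oflag=dsync   # a listed flag name: the copy should proceed
$DD --if="$D0" --of="$D1" --oflag=-1 \
    || echo "rejected as expected: Unknown file flag: -1"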
00:27:57.220 05:25:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:57.220 05:25:16 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:57.220 05:25:16 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:57.220 05:25:16 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:27:57.220 [2024-07-26 05:25:16.140811] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:57.220 [2024-07-26 05:25:16.140965] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90558 ] 00:27:57.220 [2024-07-26 05:25:16.309735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.478 [2024-07-26 05:25:16.459056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.478 [2024-07-26 05:25:16.459269] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:27:57.478 [2024-07-26 05:25:16.459306] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:57.478 [2024-07-26 05:25:16.459367] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:57.736 05:25:16 -- common/autotest_common.sh@643 -- # es=234 00:27:57.736 05:25:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:57.736 05:25:16 -- common/autotest_common.sh@652 -- # es=106 00:27:57.736 05:25:16 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:57.736 05:25:16 -- common/autotest_common.sh@660 -- # es=1 00:27:57.736 05:25:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:57.736 ************************************ 00:27:57.736 END TEST dd_invalid_json 00:27:57.736 ************************************ 00:27:57.736 00:27:57.736 real 0m0.711s 00:27:57.736 user 0m0.497s 00:27:57.736 sys 0m0.114s 00:27:57.736 05:25:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:57.736 05:25:16 -- common/autotest_common.sh@10 -- # set +x 00:27:57.736 00:27:57.736 real 0m5.560s 00:27:57.736 user 0m3.697s 00:27:57.736 sys 0m1.536s 00:27:57.737 05:25:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:57.737 ************************************ 00:27:57.737 05:25:16 -- common/autotest_common.sh@10 -- # set +x 00:27:57.737 END TEST spdk_dd_negative 00:27:57.737 ************************************ 00:27:58.001 00:27:58.001 real 2m10.132s 00:27:58.001 user 1m42.205s 00:27:58.001 sys 0m17.934s 00:27:58.001 05:25:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:58.001 05:25:16 -- common/autotest_common.sh@10 -- # set +x 00:27:58.001 ************************************ 00:27:58.001 END TEST spdk_dd 00:27:58.001 ************************************ 00:27:58.001 05:25:16 -- spdk/autotest.sh@217 -- # '[' 1 -eq 1 ']' 00:27:58.001 05:25:16 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:27:58.001 05:25:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:58.001 05:25:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:58.001 05:25:16 -- common/autotest_common.sh@10 -- # set +x 00:27:58.001 ************************************ 00:27:58.001 START TEST blockdev_nvme 
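The invalid-json case that closes the negative suite boils down to handing spdk_dd a config that does not parse; a sketch of the same failure, reproduced with a throwaway junk file rather than the empty fd the script feeds in:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
printf 'not json\n' > /tmp/bad_conf.json
if $DD --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
       --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
       --json /tmp/bad_conf.json; then
    echo "unexpected success: a broken JSON config should abort startup"
fi
# Expected on stderr: a json_config.c "Parsing JSON configuration failed" error,
# the same message app_json_config_read logged above.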
00:27:58.001 ************************************ 00:27:58.001 05:25:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:27:58.001 * Looking for test storage... 00:27:58.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:27:58.001 05:25:16 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:27:58.001 05:25:16 -- bdev/nbd_common.sh@6 -- # set -e 00:27:58.001 05:25:16 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:27:58.001 05:25:16 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:58.001 05:25:16 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:27:58.002 05:25:16 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:27:58.002 05:25:16 -- bdev/blockdev.sh@18 -- # : 00:27:58.002 05:25:16 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:27:58.002 05:25:16 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:27:58.002 05:25:16 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:27:58.002 05:25:16 -- bdev/blockdev.sh@672 -- # uname -s 00:27:58.002 05:25:16 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:27:58.002 05:25:16 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:27:58.002 05:25:16 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:27:58.002 05:25:16 -- bdev/blockdev.sh@681 -- # crypto_device= 00:27:58.002 05:25:16 -- bdev/blockdev.sh@682 -- # dek= 00:27:58.002 05:25:16 -- bdev/blockdev.sh@683 -- # env_ctx= 00:27:58.002 05:25:16 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:27:58.002 05:25:16 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:27:58.002 05:25:16 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:27:58.002 05:25:16 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:27:58.002 05:25:16 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:27:58.002 05:25:17 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=90643 00:27:58.002 05:25:17 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:27:58.002 05:25:17 -- bdev/blockdev.sh@47 -- # waitforlisten 90643 00:27:58.002 05:25:17 -- common/autotest_common.sh@819 -- # '[' -z 90643 ']' 00:27:58.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:58.002 05:25:17 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:27:58.002 05:25:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:58.002 05:25:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:58.002 05:25:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:58.002 05:25:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:58.002 05:25:17 -- common/autotest_common.sh@10 -- # set +x 00:27:58.002 [2024-07-26 05:25:17.071199] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
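Once spdk_tgt is up and listening on /var/tmp/spdk.sock, the NVMe bdev setup that blockdev.sh loads just below (gen_nvme.sh output pushed through load_subsystem_config) can also be driven with the stock rpc.py client; a sketch, with the PCIe address taken from the QEMU controller used in this run and the short options assumed from rpc.py's usual spelling:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Attach the controller at 0000:00:06.0 as "Nvme0"; its namespace shows up as Nvme0n1.
$RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
# Confirm what was created.
$RPC bdev_get_bdevs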
00:27:58.002 [2024-07-26 05:25:17.071806] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90643 ] 00:27:58.276 [2024-07-26 05:25:17.242497] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.549 [2024-07-26 05:25:17.395692] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:58.549 [2024-07-26 05:25:17.395927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.117 05:25:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:59.117 05:25:18 -- common/autotest_common.sh@852 -- # return 0 00:27:59.117 05:25:18 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:27:59.117 05:25:18 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:27:59.117 05:25:18 -- bdev/blockdev.sh@79 -- # local json 00:27:59.117 05:25:18 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:27:59.117 05:25:18 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:59.117 05:25:18 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:27:59.117 05:25:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:59.117 05:25:18 -- common/autotest_common.sh@10 -- # set +x 00:27:59.117 05:25:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:59.117 05:25:18 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:27:59.117 05:25:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:59.117 05:25:18 -- common/autotest_common.sh@10 -- # set +x 00:27:59.117 05:25:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:59.117 05:25:18 -- bdev/blockdev.sh@738 -- # cat 00:27:59.117 05:25:18 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:27:59.117 05:25:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:59.117 05:25:18 -- common/autotest_common.sh@10 -- # set +x 00:27:59.117 05:25:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:59.117 05:25:18 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:27:59.117 05:25:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:59.117 05:25:18 -- common/autotest_common.sh@10 -- # set +x 00:27:59.117 05:25:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:59.117 05:25:18 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:27:59.117 05:25:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:59.117 05:25:18 -- common/autotest_common.sh@10 -- # set +x 00:27:59.117 05:25:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:59.117 05:25:18 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:27:59.117 05:25:18 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:27:59.117 05:25:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:59.117 05:25:18 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:27:59.117 05:25:18 -- common/autotest_common.sh@10 -- # set +x 00:27:59.376 05:25:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:59.376 05:25:18 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:27:59.376 05:25:18 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "2106a3a6-9b20-4f69-9cb7-9a4841ade3b7"' ' ],' ' 
"product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2106a3a6-9b20-4f69-9cb7-9a4841ade3b7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:27:59.376 05:25:18 -- bdev/blockdev.sh@747 -- # jq -r .name 00:27:59.376 05:25:18 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:27:59.376 05:25:18 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:27:59.376 05:25:18 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:27:59.376 05:25:18 -- bdev/blockdev.sh@752 -- # killprocess 90643 00:27:59.376 05:25:18 -- common/autotest_common.sh@926 -- # '[' -z 90643 ']' 00:27:59.376 05:25:18 -- common/autotest_common.sh@930 -- # kill -0 90643 00:27:59.376 05:25:18 -- common/autotest_common.sh@931 -- # uname 00:27:59.376 05:25:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:59.376 05:25:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 90643 00:27:59.376 05:25:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:59.376 05:25:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:59.376 killing process with pid 90643 00:27:59.376 05:25:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 90643' 00:27:59.376 05:25:18 -- common/autotest_common.sh@945 -- # kill 90643 00:27:59.376 05:25:18 -- common/autotest_common.sh@950 -- # wait 90643 00:28:01.279 05:25:19 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:01.279 05:25:19 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:28:01.279 05:25:19 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:28:01.279 05:25:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:01.279 05:25:19 -- common/autotest_common.sh@10 -- # set +x 00:28:01.280 ************************************ 00:28:01.280 START TEST bdev_hello_world 00:28:01.280 ************************************ 00:28:01.280 05:25:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:28:01.280 [2024-07-26 05:25:20.017574] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:28:01.280 [2024-07-26 05:25:20.017729] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90709 ] 00:28:01.280 [2024-07-26 05:25:20.185734] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.280 [2024-07-26 05:25:20.334612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.848 [2024-07-26 05:25:20.673624] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:28:01.848 [2024-07-26 05:25:20.673684] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:28:01.848 [2024-07-26 05:25:20.673722] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:28:01.848 [2024-07-26 05:25:20.676282] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:28:01.848 [2024-07-26 05:25:20.676785] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:28:01.848 [2024-07-26 05:25:20.676848] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:28:01.848 [2024-07-26 05:25:20.677102] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:28:01.848 00:28:01.848 [2024-07-26 05:25:20.677137] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:28:02.785 00:28:02.785 real 0m1.619s 00:28:02.785 user 0m1.309s 00:28:02.785 sys 0m0.210s 00:28:02.785 05:25:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:02.785 ************************************ 00:28:02.785 END TEST bdev_hello_world 00:28:02.785 ************************************ 00:28:02.785 05:25:21 -- common/autotest_common.sh@10 -- # set +x 00:28:02.785 05:25:21 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:28:02.785 05:25:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:02.785 05:25:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:02.785 05:25:21 -- common/autotest_common.sh@10 -- # set +x 00:28:02.785 ************************************ 00:28:02.785 START TEST bdev_bounds 00:28:02.785 ************************************ 00:28:02.785 05:25:21 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:28:02.785 05:25:21 -- bdev/blockdev.sh@288 -- # bdevio_pid=90745 00:28:02.786 05:25:21 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:28:02.786 Process bdevio pid: 90745 00:28:02.786 05:25:21 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 90745' 00:28:02.786 05:25:21 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:28:02.786 05:25:21 -- bdev/blockdev.sh@291 -- # waitforlisten 90745 00:28:02.786 05:25:21 -- common/autotest_common.sh@819 -- # '[' -z 90745 ']' 00:28:02.786 05:25:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:02.786 05:25:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:02.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:02.786 05:25:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
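The hello-world step above reduces to running the prebuilt hello_bdev example against the generated bdev config; with the paths used in the log:
# Open Nvme0n1, write the hello string, read it back, then stop the app
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1
# Success is the final notice: "Read string from bdev : Hello World!"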
00:28:02.786 05:25:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:02.786 05:25:21 -- common/autotest_common.sh@10 -- # set +x 00:28:02.786 [2024-07-26 05:25:21.694721] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:02.786 [2024-07-26 05:25:21.695388] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90745 ] 00:28:02.786 [2024-07-26 05:25:21.864157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:03.045 [2024-07-26 05:25:22.025634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.045 [2024-07-26 05:25:22.025696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.045 [2024-07-26 05:25:22.025715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:03.611 05:25:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:03.611 05:25:22 -- common/autotest_common.sh@852 -- # return 0 00:28:03.611 05:25:22 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:28:03.611 I/O targets: 00:28:03.611 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:28:03.611 00:28:03.611 00:28:03.611 CUnit - A unit testing framework for C - Version 2.1-3 00:28:03.611 http://cunit.sourceforge.net/ 00:28:03.611 00:28:03.611 00:28:03.611 Suite: bdevio tests on: Nvme0n1 00:28:03.611 Test: blockdev write read block ...passed 00:28:03.611 Test: blockdev write zeroes read block ...passed 00:28:03.611 Test: blockdev write zeroes read no split ...passed 00:28:03.611 Test: blockdev write zeroes read split ...passed 00:28:03.869 Test: blockdev write zeroes read split partial ...passed 00:28:03.869 Test: blockdev reset ...[2024-07-26 05:25:22.725110] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:28:03.869 [2024-07-26 05:25:22.728667] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
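bdev_bounds drives the CUnit suite above through bdevio: the binary is started in wait mode and the tests are then triggered over its RPC socket. Condensed from the commands visible in the log:
# Start bdevio in wait mode (-w) with the same flags and JSON config as the harness
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
# Once it is listening, kick off the test run
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests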
00:28:03.869 passed 00:28:03.869 Test: blockdev write read 8 blocks ...passed 00:28:03.869 Test: blockdev write read size > 128k ...passed 00:28:03.869 Test: blockdev write read invalid size ...passed 00:28:03.869 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:03.869 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:03.869 Test: blockdev write read max offset ...passed 00:28:03.869 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:03.869 Test: blockdev writev readv 8 blocks ...passed 00:28:03.869 Test: blockdev writev readv 30 x 1block ...passed 00:28:03.869 Test: blockdev writev readv block ...passed 00:28:03.869 Test: blockdev writev readv size > 128k ...passed 00:28:03.869 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:03.869 Test: blockdev comparev and writev ...[2024-07-26 05:25:22.737901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29020d000 len:0x1000 00:28:03.869 [2024-07-26 05:25:22.738255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:28:03.869 passed 00:28:03.869 Test: blockdev nvme passthru rw ...passed 00:28:03.869 Test: blockdev nvme passthru vendor specific ...[2024-07-26 05:25:22.739435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:28:03.869 [2024-07-26 05:25:22.739692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:28:03.869 passed 00:28:03.869 Test: blockdev nvme admin passthru ...passed 00:28:03.869 Test: blockdev copy ...passed 00:28:03.869 00:28:03.869 Run Summary: Type Total Ran Passed Failed Inactive 00:28:03.869 suites 1 1 n/a 0 0 00:28:03.869 tests 23 23 23 0 0 00:28:03.869 asserts 152 152 152 0 n/a 00:28:03.869 00:28:03.869 Elapsed time = 0.190 seconds 00:28:03.869 0 00:28:03.869 05:25:22 -- bdev/blockdev.sh@293 -- # killprocess 90745 00:28:03.869 05:25:22 -- common/autotest_common.sh@926 -- # '[' -z 90745 ']' 00:28:03.869 05:25:22 -- common/autotest_common.sh@930 -- # kill -0 90745 00:28:03.869 05:25:22 -- common/autotest_common.sh@931 -- # uname 00:28:03.869 05:25:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:03.869 05:25:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 90745 00:28:03.869 05:25:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:03.869 05:25:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:03.869 05:25:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 90745' 00:28:03.869 killing process with pid 90745 00:28:03.869 05:25:22 -- common/autotest_common.sh@945 -- # kill 90745 00:28:03.869 05:25:22 -- common/autotest_common.sh@950 -- # wait 90745 00:28:04.803 05:25:23 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:28:04.803 00:28:04.803 real 0m2.029s 00:28:04.803 user 0m4.715s 00:28:04.803 sys 0m0.325s 00:28:04.803 05:25:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:04.803 05:25:23 -- common/autotest_common.sh@10 -- # set +x 00:28:04.803 ************************************ 00:28:04.803 END TEST bdev_bounds 00:28:04.803 ************************************ 00:28:04.803 05:25:23 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:28:04.803 
05:25:23 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:28:04.803 05:25:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:04.803 05:25:23 -- common/autotest_common.sh@10 -- # set +x 00:28:04.803 ************************************ 00:28:04.803 START TEST bdev_nbd 00:28:04.803 ************************************ 00:28:04.803 05:25:23 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:28:04.803 05:25:23 -- bdev/blockdev.sh@298 -- # uname -s 00:28:04.803 05:25:23 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:28:04.803 05:25:23 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:04.803 05:25:23 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:04.803 05:25:23 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1') 00:28:04.804 05:25:23 -- bdev/blockdev.sh@302 -- # local bdev_all 00:28:04.804 05:25:23 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:28:04.804 05:25:23 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:28:04.804 05:25:23 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:28:04.804 05:25:23 -- bdev/blockdev.sh@309 -- # local nbd_all 00:28:04.804 05:25:23 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:28:04.804 05:25:23 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0') 00:28:04.804 05:25:23 -- bdev/blockdev.sh@312 -- # local nbd_list 00:28:04.804 05:25:23 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1') 00:28:04.804 05:25:23 -- bdev/blockdev.sh@313 -- # local bdev_list 00:28:04.804 05:25:23 -- bdev/blockdev.sh@316 -- # nbd_pid=90799 00:28:04.804 05:25:23 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:28:04.804 05:25:23 -- bdev/blockdev.sh@318 -- # waitforlisten 90799 /var/tmp/spdk-nbd.sock 00:28:04.804 05:25:23 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:28:04.804 05:25:23 -- common/autotest_common.sh@819 -- # '[' -z 90799 ']' 00:28:04.804 05:25:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:28:04.804 05:25:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:04.804 05:25:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:28:04.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:28:04.804 05:25:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:04.804 05:25:23 -- common/autotest_common.sh@10 -- # set +x 00:28:04.804 [2024-07-26 05:25:23.777554] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
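The bdev_nbd test starting here exports Nvme0n1 as a kernel block device via a bdev_svc app listening on /var/tmp/spdk-nbd.sock. The start/stop cycle it verifies, using the RPCs that appear further down in the log:
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
# Map the bdev onto /dev/nbd0
$RPC nbd_start_disk Nvme0n1 /dev/nbd0
# Check the mapping, then remove it again
$RPC nbd_get_disks | jq -r '.[] | .nbd_device'
$RPC nbd_stop_disk /dev/nbd0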
00:28:04.804 [2024-07-26 05:25:23.777689] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:05.062 [2024-07-26 05:25:23.948640] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.062 [2024-07-26 05:25:24.100492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.628 05:25:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:05.628 05:25:24 -- common/autotest_common.sh@852 -- # return 0 00:28:05.628 05:25:24 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:28:05.628 05:25:24 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:05.628 05:25:24 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1') 00:28:05.628 05:25:24 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:28:05.628 05:25:24 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:28:05.628 05:25:24 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:05.628 05:25:24 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1') 00:28:05.628 05:25:24 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:28:05.628 05:25:24 -- bdev/nbd_common.sh@24 -- # local i 00:28:05.628 05:25:24 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:28:05.628 05:25:24 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:28:05.628 05:25:24 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:28:05.628 05:25:24 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:28:05.887 05:25:24 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:28:05.887 05:25:24 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:28:05.887 05:25:24 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:28:05.887 05:25:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:28:05.887 05:25:24 -- common/autotest_common.sh@857 -- # local i 00:28:05.887 05:25:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:28:05.887 05:25:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:28:05.887 05:25:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:28:05.887 05:25:24 -- common/autotest_common.sh@861 -- # break 00:28:05.887 05:25:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:28:05.887 05:25:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:28:05.887 05:25:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:05.887 1+0 records in 00:28:05.887 1+0 records out 00:28:05.887 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323987 s, 12.6 MB/s 00:28:05.887 05:25:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:05.887 05:25:24 -- common/autotest_common.sh@874 -- # size=4096 00:28:05.887 05:25:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:05.887 05:25:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:28:05.887 05:25:24 -- common/autotest_common.sh@877 -- # return 0 00:28:05.887 05:25:24 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:05.887 05:25:24 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:28:05.887 05:25:24 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:06.145 05:25:25 
-- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:28:06.145 { 00:28:06.145 "nbd_device": "/dev/nbd0", 00:28:06.145 "bdev_name": "Nvme0n1" 00:28:06.145 } 00:28:06.145 ]' 00:28:06.145 05:25:25 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:28:06.145 05:25:25 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:28:06.145 05:25:25 -- bdev/nbd_common.sh@119 -- # echo '[ 00:28:06.145 { 00:28:06.145 "nbd_device": "/dev/nbd0", 00:28:06.145 "bdev_name": "Nvme0n1" 00:28:06.145 } 00:28:06.145 ]' 00:28:06.145 05:25:25 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:06.145 05:25:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:06.145 05:25:25 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:06.145 05:25:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:06.145 05:25:25 -- bdev/nbd_common.sh@51 -- # local i 00:28:06.145 05:25:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:06.145 05:25:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:06.404 05:25:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:06.404 05:25:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:06.404 05:25:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:06.404 05:25:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:06.404 05:25:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:06.404 05:25:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:06.404 05:25:25 -- bdev/nbd_common.sh@41 -- # break 00:28:06.404 05:25:25 -- bdev/nbd_common.sh@45 -- # return 0 00:28:06.404 05:25:25 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:06.404 05:25:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:06.404 05:25:25 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:06.404 05:25:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:06.404 05:25:25 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:06.404 05:25:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:06.662 05:25:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:06.662 05:25:25 -- bdev/nbd_common.sh@65 -- # echo '' 00:28:06.662 05:25:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:06.662 05:25:25 -- bdev/nbd_common.sh@65 -- # true 00:28:06.662 05:25:25 -- bdev/nbd_common.sh@65 -- # count=0 00:28:06.662 05:25:25 -- bdev/nbd_common.sh@66 -- # echo 0 00:28:06.662 05:25:25 -- bdev/nbd_common.sh@122 -- # count=0 00:28:06.662 05:25:25 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:28:06.662 05:25:25 -- bdev/nbd_common.sh@127 -- # return 0 00:28:06.662 05:25:25 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:28:06.662 05:25:25 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:06.662 05:25:25 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1') 00:28:06.663 05:25:25 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:28:06.663 05:25:25 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:28:06.663 05:25:25 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:28:06.663 05:25:25 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:28:06.663 05:25:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:06.663 05:25:25 -- 
bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1') 00:28:06.663 05:25:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:06.663 05:25:25 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:06.663 05:25:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:06.663 05:25:25 -- bdev/nbd_common.sh@12 -- # local i 00:28:06.663 05:25:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:06.663 05:25:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:06.663 05:25:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:28:06.663 /dev/nbd0 00:28:06.663 05:25:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:06.663 05:25:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:06.663 05:25:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:28:06.663 05:25:25 -- common/autotest_common.sh@857 -- # local i 00:28:06.663 05:25:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:28:06.663 05:25:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:28:06.663 05:25:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:28:06.663 05:25:25 -- common/autotest_common.sh@861 -- # break 00:28:06.663 05:25:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:28:06.663 05:25:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:28:06.663 05:25:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:06.663 1+0 records in 00:28:06.663 1+0 records out 00:28:06.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526077 s, 7.8 MB/s 00:28:06.663 05:25:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:06.663 05:25:25 -- common/autotest_common.sh@874 -- # size=4096 00:28:06.663 05:25:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:06.663 05:25:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:28:06.663 05:25:25 -- common/autotest_common.sh@877 -- # return 0 00:28:06.663 05:25:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:06.663 05:25:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:06.663 05:25:25 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:06.663 05:25:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:06.663 05:25:25 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:06.921 05:25:26 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:28:06.921 { 00:28:06.921 "nbd_device": "/dev/nbd0", 00:28:06.921 "bdev_name": "Nvme0n1" 00:28:06.921 } 00:28:06.921 ]' 00:28:06.921 05:25:26 -- bdev/nbd_common.sh@64 -- # echo '[ 00:28:06.921 { 00:28:06.921 "nbd_device": "/dev/nbd0", 00:28:06.921 "bdev_name": "Nvme0n1" 00:28:06.921 } 00:28:06.921 ]' 00:28:06.921 05:25:26 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:06.921 05:25:26 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:28:06.921 05:25:26 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:28:06.921 05:25:26 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:06.921 05:25:26 -- bdev/nbd_common.sh@65 -- # count=1 00:28:06.921 05:25:26 -- bdev/nbd_common.sh@66 -- # echo 1 00:28:06.921 05:25:26 -- bdev/nbd_common.sh@95 -- # count=1 00:28:06.921 05:25:26 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:28:06.921 05:25:26 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:28:06.921 
05:25:26 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:28:06.921 05:25:26 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:06.921 05:25:26 -- bdev/nbd_common.sh@71 -- # local operation=write 00:28:06.921 05:25:26 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:06.921 05:25:26 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:28:06.921 05:25:26 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:28:06.921 256+0 records in 00:28:06.922 256+0 records out 00:28:06.922 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00717844 s, 146 MB/s 00:28:06.922 05:25:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:06.922 05:25:26 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:28:07.180 256+0 records in 00:28:07.180 256+0 records out 00:28:07.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0691199 s, 15.2 MB/s 00:28:07.180 05:25:26 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:28:07.180 05:25:26 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:28:07.180 05:25:26 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:07.180 05:25:26 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:28:07.180 05:25:26 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:07.180 05:25:26 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:28:07.180 05:25:26 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:28:07.180 05:25:26 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:07.180 05:25:26 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:28:07.180 05:25:26 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:07.180 05:25:26 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:07.180 05:25:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:07.180 05:25:26 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:07.180 05:25:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:07.180 05:25:26 -- bdev/nbd_common.sh@51 -- # local i 00:28:07.180 05:25:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:07.180 05:25:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:07.438 05:25:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:07.438 05:25:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:07.438 05:25:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:07.438 05:25:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:07.438 05:25:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:07.438 05:25:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:07.438 05:25:26 -- bdev/nbd_common.sh@41 -- # break 00:28:07.438 05:25:26 -- bdev/nbd_common.sh@45 -- # return 0 00:28:07.438 05:25:26 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:07.438 05:25:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:07.438 05:25:26 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:07.697 05:25:26 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:07.697 05:25:26 -- bdev/nbd_common.sh@64 -- # echo '[]' 
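The data-verification pass above pushes 1 MiB of random data through /dev/nbd0 and byte-compares it against the source file. Stripped of the harness bookkeeping:
# Build a 1 MiB reference file
dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
# Write it through the NBD device, bypassing the page cache
dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
# Compare the first 1 MiB of the device against the reference
cmp -b -n 1M nbdrandtest /dev/nbd0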
00:28:07.697 05:25:26 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:07.697 05:25:26 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:07.697 05:25:26 -- bdev/nbd_common.sh@65 -- # echo '' 00:28:07.697 05:25:26 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:07.697 05:25:26 -- bdev/nbd_common.sh@65 -- # true 00:28:07.697 05:25:26 -- bdev/nbd_common.sh@65 -- # count=0 00:28:07.697 05:25:26 -- bdev/nbd_common.sh@66 -- # echo 0 00:28:07.697 05:25:26 -- bdev/nbd_common.sh@104 -- # count=0 00:28:07.697 05:25:26 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:28:07.697 05:25:26 -- bdev/nbd_common.sh@109 -- # return 0 00:28:07.697 05:25:26 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:07.697 05:25:26 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:07.697 05:25:26 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:28:07.697 05:25:26 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:28:07.697 05:25:26 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:28:07.697 05:25:26 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:28:07.955 malloc_lvol_verify 00:28:07.955 05:25:26 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:28:07.955 4530d35f-033d-48b3-90a0-3da0804e7162 00:28:08.214 05:25:27 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:28:08.214 cb6e686d-2d09-4ac8-8de5-e0a067b09dfe 00:28:08.214 05:25:27 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:28:08.472 /dev/nbd0 00:28:08.472 05:25:27 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:28:08.472 mke2fs 1.47.0 (5-Feb-2023) 00:28:08.472 00:28:08.472 Filesystem too small for a journal 00:28:08.472 Discarding device blocks: 0/1024 done 00:28:08.472 Creating filesystem with 1024 4k blocks and 1024 inodes 00:28:08.472 00:28:08.472 Allocating group tables: 0/1 done 00:28:08.472 Writing inode tables: 0/1 done 00:28:08.472 Writing superblocks and filesystem accounting information: 0/1 done 00:28:08.472 00:28:08.472 05:25:27 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:28:08.472 05:25:27 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:08.472 05:25:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:08.472 05:25:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:08.472 05:25:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:08.472 05:25:27 -- bdev/nbd_common.sh@51 -- # local i 00:28:08.472 05:25:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:08.472 05:25:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:08.730 05:25:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:08.730 05:25:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:08.730 05:25:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:08.730 05:25:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:08.730 05:25:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:08.730 05:25:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:08.730 05:25:27 -- bdev/nbd_common.sh@41 -- # break 00:28:08.730 05:25:27 -- 
bdev/nbd_common.sh@45 -- # return 0 00:28:08.730 05:25:27 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:28:08.730 05:25:27 -- bdev/nbd_common.sh@147 -- # return 0 00:28:08.730 05:25:27 -- bdev/blockdev.sh@324 -- # killprocess 90799 00:28:08.730 05:25:27 -- common/autotest_common.sh@926 -- # '[' -z 90799 ']' 00:28:08.730 05:25:27 -- common/autotest_common.sh@930 -- # kill -0 90799 00:28:08.730 05:25:27 -- common/autotest_common.sh@931 -- # uname 00:28:08.730 05:25:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:08.730 05:25:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 90799 00:28:08.730 05:25:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:08.730 05:25:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:08.730 killing process with pid 90799 00:28:08.730 05:25:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 90799' 00:28:08.730 05:25:27 -- common/autotest_common.sh@945 -- # kill 90799 00:28:08.730 05:25:27 -- common/autotest_common.sh@950 -- # wait 90799 00:28:09.666 05:25:28 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:28:09.666 00:28:09.666 real 0m4.993s 00:28:09.666 user 0m7.219s 00:28:09.666 sys 0m1.005s 00:28:09.666 05:25:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:09.666 05:25:28 -- common/autotest_common.sh@10 -- # set +x 00:28:09.666 ************************************ 00:28:09.666 END TEST bdev_nbd 00:28:09.666 ************************************ 00:28:09.666 05:25:28 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:28:09.666 05:25:28 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:28:09.666 skipping fio tests on NVMe due to multi-ns failures. 00:28:09.666 05:25:28 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:28:09.666 05:25:28 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:09.666 05:25:28 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:09.666 05:25:28 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:28:09.666 05:25:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:09.666 05:25:28 -- common/autotest_common.sh@10 -- # set +x 00:28:09.666 ************************************ 00:28:09.666 START TEST bdev_verify 00:28:09.666 ************************************ 00:28:09.666 05:25:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:09.925 [2024-07-26 05:25:28.813879] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:09.925 [2024-07-26 05:25:28.814033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90975 ] 00:28:09.925 [2024-07-26 05:25:28.968932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:10.183 [2024-07-26 05:25:29.117406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.183 [2024-07-26 05:25:29.117430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.442 Running I/O for 5 seconds... 
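The last NBD check above layers a logical volume on a malloc bdev, exports it over NBD and formats it, confirming that writes through /dev/nbd0 reach the lvol. A sketch of that flow with the RPCs from the log, on the same spdk-nbd.sock socket:
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
# Small malloc bdev (16 MB, 512-byte blocks), an lvstore on it, and a 4 MiB lvol named lvs/lvol
$RPC bdev_malloc_create -b malloc_lvol_verify 16 512
$RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs
$RPC bdev_lvol_create lvol 4 -l lvs
# Export the lvol over NBD and put ext4 on it (hence the "Filesystem too small for a journal" note)
$RPC nbd_start_disk lvs/lvol /dev/nbd0
mkfs.ext4 /dev/nbd0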
00:28:15.755 00:28:15.755 Latency(us) 00:28:15.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.755 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:15.755 Verification LBA range: start 0x0 length 0xa0000 00:28:15.755 Nvme0n1 : 5.01 17357.53 67.80 0.00 0.00 7341.38 357.47 12273.11 00:28:15.755 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:15.755 Verification LBA range: start 0xa0000 length 0xa0000 00:28:15.755 Nvme0n1 : 5.01 17352.95 67.78 0.00 0.00 7343.03 476.63 14120.03 00:28:15.755 =================================================================================================================== 00:28:15.755 Total : 34710.48 135.59 0.00 0.00 7342.21 357.47 14120.03 00:28:23.913 00:28:23.913 real 0m14.152s 00:28:23.913 user 0m27.221s 00:28:23.913 sys 0m0.275s 00:28:23.913 05:25:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:23.913 05:25:42 -- common/autotest_common.sh@10 -- # set +x 00:28:23.913 ************************************ 00:28:23.913 END TEST bdev_verify 00:28:23.913 ************************************ 00:28:23.913 05:25:42 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:23.913 05:25:42 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:28:23.913 05:25:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:23.913 05:25:42 -- common/autotest_common.sh@10 -- # set +x 00:28:23.914 ************************************ 00:28:23.914 START TEST bdev_verify_big_io 00:28:23.914 ************************************ 00:28:23.914 05:25:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:23.914 [2024-07-26 05:25:43.006490] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:23.914 [2024-07-26 05:25:43.007117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91136 ] 00:28:24.172 [2024-07-26 05:25:43.161667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:24.430 [2024-07-26 05:25:43.312488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.431 [2024-07-26 05:25:43.312489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.689 Running I/O for 5 seconds... 
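Both the verify run above and the big-I/O run that follows use the bdevperf example with a verify workload; only the I/O size differs. The invocation, flags exactly as in the log:
# Queue depth 128, 4 KiB I/O, verify workload, 5 seconds, core mask 0x3
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3
# The big-I/O variant below swaps -o 4096 for -o 65536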
00:28:29.961 00:28:29.961 Latency(us) 00:28:29.961 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:29.961 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:29.961 Verification LBA range: start 0x0 length 0xa000 00:28:29.961 Nvme0n1 : 5.04 1641.82 102.61 0.00 0.00 76803.42 439.39 119156.36 00:28:29.961 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:29.961 Verification LBA range: start 0xa000 length 0xa000 00:28:29.961 Nvme0n1 : 5.04 1835.69 114.73 0.00 0.00 68766.84 580.89 108670.60 00:28:29.961 =================================================================================================================== 00:28:29.961 Total : 3477.51 217.34 0.00 0.00 72562.63 439.39 119156.36 00:28:30.897 00:28:30.897 real 0m7.007s 00:28:30.897 user 0m13.013s 00:28:30.897 sys 0m0.203s 00:28:30.897 05:25:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:30.897 05:25:49 -- common/autotest_common.sh@10 -- # set +x 00:28:30.897 ************************************ 00:28:30.897 END TEST bdev_verify_big_io 00:28:30.897 ************************************ 00:28:31.156 05:25:50 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:31.156 05:25:50 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:28:31.156 05:25:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:31.156 05:25:50 -- common/autotest_common.sh@10 -- # set +x 00:28:31.156 ************************************ 00:28:31.156 START TEST bdev_write_zeroes 00:28:31.156 ************************************ 00:28:31.156 05:25:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:31.156 [2024-07-26 05:25:50.068000] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:31.156 [2024-07-26 05:25:50.068770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91228 ] 00:28:31.156 [2024-07-26 05:25:50.225954] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.415 [2024-07-26 05:25:50.433107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.675 Running I/O for 1 seconds... 
00:28:33.047 00:28:33.047 Latency(us) 00:28:33.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:33.047 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:33.047 Nvme0n1 : 1.00 56310.50 219.96 0.00 0.00 2267.02 945.80 6494.02 00:28:33.047 =================================================================================================================== 00:28:33.047 Total : 56310.50 219.96 0.00 0.00 2267.02 945.80 6494.02 00:28:33.614 00:28:33.614 real 0m2.689s 00:28:33.614 user 0m2.405s 00:28:33.614 sys 0m0.183s 00:28:33.614 05:25:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:33.614 05:25:52 -- common/autotest_common.sh@10 -- # set +x 00:28:33.614 ************************************ 00:28:33.614 END TEST bdev_write_zeroes 00:28:33.614 ************************************ 00:28:33.873 05:25:52 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:33.873 05:25:52 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:28:33.873 05:25:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:33.873 05:25:52 -- common/autotest_common.sh@10 -- # set +x 00:28:33.873 ************************************ 00:28:33.873 START TEST bdev_json_nonenclosed 00:28:33.873 ************************************ 00:28:33.873 05:25:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:33.873 [2024-07-26 05:25:52.820403] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:33.873 [2024-07-26 05:25:52.820575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91277 ] 00:28:34.131 [2024-07-26 05:25:52.990043] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.131 [2024-07-26 05:25:53.138611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.131 [2024-07-26 05:25:53.138805] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:28:34.131 [2024-07-26 05:25:53.138834] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:34.388 00:28:34.388 real 0m0.717s 00:28:34.388 user 0m0.499s 00:28:34.388 sys 0m0.118s 00:28:34.388 05:25:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:34.388 05:25:53 -- common/autotest_common.sh@10 -- # set +x 00:28:34.388 ************************************ 00:28:34.388 END TEST bdev_json_nonenclosed 00:28:34.388 ************************************ 00:28:34.646 05:25:53 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:34.646 05:25:53 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:28:34.646 05:25:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:34.646 05:25:53 -- common/autotest_common.sh@10 -- # set +x 00:28:34.646 ************************************ 00:28:34.646 START TEST bdev_json_nonarray 00:28:34.646 ************************************ 00:28:34.646 05:25:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:34.646 [2024-07-26 05:25:53.572344] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:34.646 [2024-07-26 05:25:53.572493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91298 ] 00:28:34.646 [2024-07-26 05:25:53.721075] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.905 [2024-07-26 05:25:53.870645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.905 [2024-07-26 05:25:53.870838] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
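The two negative tests above hand bdevperf deliberately malformed configs and only check that spdk_app_start fails with the errors shown (config not enclosed in {}, and "subsystems" not an array). Illustrative shapes for such files, not the exact contents shipped in test/bdev:
# nonenclosed.json: top level is not a JSON object, e.g.
#   "subsystems": []
# nonarray.json: "subsystems" is present but not an array, e.g.
#   { "subsystems": { } }
# Either one makes the app exit non-zero:
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json \
    -q 128 -o 4096 -w write_zeroes -t 1 || echo 'failed as expected'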
00:28:34.905 [2024-07-26 05:25:53.870865] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:35.163 00:28:35.164 real 0m0.694s 00:28:35.164 user 0m0.486s 00:28:35.164 sys 0m0.108s 00:28:35.164 05:25:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:35.164 05:25:54 -- common/autotest_common.sh@10 -- # set +x 00:28:35.164 ************************************ 00:28:35.164 END TEST bdev_json_nonarray 00:28:35.164 ************************************ 00:28:35.164 05:25:54 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:28:35.164 05:25:54 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:28:35.164 05:25:54 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:28:35.164 05:25:54 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:28:35.164 05:25:54 -- bdev/blockdev.sh@809 -- # cleanup 00:28:35.164 05:25:54 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:28:35.164 05:25:54 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:35.164 05:25:54 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:28:35.164 05:25:54 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:28:35.164 05:25:54 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:28:35.164 05:25:54 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:28:35.164 00:28:35.164 real 0m37.356s 00:28:35.164 user 1m0.059s 00:28:35.164 sys 0m3.230s 00:28:35.164 05:25:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:35.164 05:25:54 -- common/autotest_common.sh@10 -- # set +x 00:28:35.164 ************************************ 00:28:35.164 END TEST blockdev_nvme 00:28:35.164 ************************************ 00:28:35.423 05:25:54 -- spdk/autotest.sh@219 -- # uname -s 00:28:35.423 05:25:54 -- spdk/autotest.sh@219 -- # [[ Linux == Linux ]] 00:28:35.423 05:25:54 -- spdk/autotest.sh@220 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:28:35.423 05:25:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:35.423 05:25:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:35.423 05:25:54 -- common/autotest_common.sh@10 -- # set +x 00:28:35.423 ************************************ 00:28:35.423 START TEST blockdev_nvme_gpt 00:28:35.423 ************************************ 00:28:35.423 05:25:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:28:35.423 * Looking for test storage... 
00:28:35.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:28:35.423 05:25:54 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:28:35.423 05:25:54 -- bdev/nbd_common.sh@6 -- # set -e 00:28:35.423 05:25:54 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:28:35.423 05:25:54 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:35.423 05:25:54 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:28:35.423 05:25:54 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:28:35.423 05:25:54 -- bdev/blockdev.sh@18 -- # : 00:28:35.423 05:25:54 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:28:35.423 05:25:54 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:28:35.423 05:25:54 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:28:35.423 05:25:54 -- bdev/blockdev.sh@672 -- # uname -s 00:28:35.423 05:25:54 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:28:35.423 05:25:54 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:28:35.423 05:25:54 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:28:35.423 05:25:54 -- bdev/blockdev.sh@681 -- # crypto_device= 00:28:35.423 05:25:54 -- bdev/blockdev.sh@682 -- # dek= 00:28:35.423 05:25:54 -- bdev/blockdev.sh@683 -- # env_ctx= 00:28:35.423 05:25:54 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:28:35.423 05:25:54 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:28:35.423 05:25:54 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:28:35.423 05:25:54 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:28:35.423 05:25:54 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:28:35.423 05:25:54 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=91373 00:28:35.423 05:25:54 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:28:35.423 05:25:54 -- bdev/blockdev.sh@47 -- # waitforlisten 91373 00:28:35.423 05:25:54 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:28:35.423 05:25:54 -- common/autotest_common.sh@819 -- # '[' -z 91373 ']' 00:28:35.423 05:25:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:35.423 05:25:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:35.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:35.423 05:25:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:35.423 05:25:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:35.423 05:25:54 -- common/autotest_common.sh@10 -- # set +x 00:28:35.423 [2024-07-26 05:25:54.482125] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
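Before the GPT label can be written, the disk has to be visible to the kernel again: the suite calls setup.sh reset (binding 0000:00:06.0 back from uio_pci_generic to nvme), partitions /dev/nvme0n1, and later runs setup.sh to hand the device back to SPDK. In outline:
# Give the NVMe device back to the kernel nvme driver so /dev/nvme0n1 appears
/home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
# ... parted/sgdisk partitioning happens here ...
# Rebind the device to the SPDK userspace driver before the tests continue
/home/vagrant/spdk_repo/spdk/scripts/setup.sh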
00:28:35.423 [2024-07-26 05:25:54.482291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91373 ] 00:28:35.682 [2024-07-26 05:25:54.654148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.941 [2024-07-26 05:25:54.807488] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:35.941 [2024-07-26 05:25:54.807727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:36.509 05:25:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:36.509 05:25:55 -- common/autotest_common.sh@852 -- # return 0 00:28:36.509 05:25:55 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:28:36.509 05:25:55 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:28:36.509 05:25:55 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:36.768 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:28:36.768 Waiting for block devices as requested 00:28:36.768 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:28:36.768 05:25:55 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:28:36.768 05:25:55 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:28:36.768 05:25:55 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:28:36.768 05:25:55 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:28:36.768 05:25:55 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:28:36.768 05:25:55 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:28:36.768 05:25:55 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:28:36.768 05:25:55 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:36.768 05:25:55 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:28:36.768 05:25:55 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme0/nvme0n1') 00:28:36.768 05:25:55 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:28:36.768 05:25:55 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:28:36.768 05:25:55 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:28:36.768 05:25:55 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:28:36.768 05:25:55 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme0n1 00:28:36.768 05:25:55 -- bdev/blockdev.sh@111 -- # parted /dev/nvme0n1 -ms print 00:28:37.027 05:25:55 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:28:37.027 BYT; 00:28:37.027 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:28:37.027 05:25:55 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:28:37.027 BYT; 00:28:37.027 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:28:37.027 05:25:55 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme0n1 00:28:37.027 05:25:55 -- bdev/blockdev.sh@114 -- # break 00:28:37.027 05:25:55 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme0n1 ]] 00:28:37.027 05:25:55 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:28:37.027 05:25:55 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:28:37.027 05:25:55 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart 
SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:28:37.027 05:25:56 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:28:37.027 05:25:56 -- scripts/common.sh@410 -- # local spdk_guid 00:28:37.027 05:25:56 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:28:37.027 05:25:56 -- scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:28:37.027 05:25:56 -- scripts/common.sh@415 -- # IFS='()' 00:28:37.027 05:25:56 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:28:37.027 05:25:56 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:28:37.027 05:25:56 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:28:37.027 05:25:56 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:28:37.027 05:25:56 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:28:37.027 05:25:56 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:28:37.027 05:25:56 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:28:37.027 05:25:56 -- scripts/common.sh@422 -- # local spdk_guid 00:28:37.027 05:25:56 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:28:37.027 05:25:56 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:28:37.027 05:25:56 -- scripts/common.sh@427 -- # IFS='()' 00:28:37.027 05:25:56 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:28:37.027 05:25:56 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:28:37.027 05:25:56 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:28:37.027 05:25:56 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:28:37.027 05:25:56 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:28:37.027 05:25:56 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:28:37.027 05:25:56 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:28:38.404 The operation has completed successfully. 00:28:38.404 05:25:57 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:28:39.340 The operation has completed successfully. 
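The partitioning above labels the disk, creates two half-size partitions and retags them with the SPDK partition-type GUIDs grepped out of module/bdev/gpt/gpt.h, so the gpt vbdev module will claim them. Condensed:
# Two partitions covering the namespace
parted -s /dev/nvme0n1 mklabel gpt \
    mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
# Retag with the SPDK GPT type GUIDs and the fixed unique GUIDs used by the test
sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1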
00:28:39.340 05:25:58 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:39.340 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:28:39.609 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:28:39.903 05:25:58 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:28:39.903 05:25:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:39.903 05:25:58 -- common/autotest_common.sh@10 -- # set +x 00:28:40.175 [] 00:28:40.175 05:25:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:40.175 05:25:59 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:28:40.175 05:25:59 -- bdev/blockdev.sh@79 -- # local json 00:28:40.175 05:25:59 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:28:40.175 05:25:59 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:28:40.175 05:25:59 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:28:40.175 05:25:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:40.175 05:25:59 -- common/autotest_common.sh@10 -- # set +x 00:28:40.176 05:25:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:40.176 05:25:59 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:28:40.176 05:25:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:40.176 05:25:59 -- common/autotest_common.sh@10 -- # set +x 00:28:40.176 05:25:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:40.176 05:25:59 -- bdev/blockdev.sh@738 -- # cat 00:28:40.176 05:25:59 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:28:40.176 05:25:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:40.176 05:25:59 -- common/autotest_common.sh@10 -- # set +x 00:28:40.176 05:25:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:40.176 05:25:59 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:28:40.176 05:25:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:40.176 05:25:59 -- common/autotest_common.sh@10 -- # set +x 00:28:40.176 05:25:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:40.176 05:25:59 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:28:40.176 05:25:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:40.176 05:25:59 -- common/autotest_common.sh@10 -- # set +x 00:28:40.176 05:25:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:40.176 05:25:59 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:28:40.176 05:25:59 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:28:40.176 05:25:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:40.176 05:25:59 -- common/autotest_common.sh@10 -- # set +x 00:28:40.176 05:25:59 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:28:40.176 05:25:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:40.176 05:25:59 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:28:40.176 05:25:59 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' 
"claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:28:40.176 05:25:59 -- bdev/blockdev.sh@747 -- # jq -r .name 00:28:40.176 05:25:59 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:28:40.176 05:25:59 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:28:40.176 05:25:59 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:28:40.176 05:25:59 -- bdev/blockdev.sh@752 -- # killprocess 91373 00:28:40.176 05:25:59 -- common/autotest_common.sh@926 -- # '[' -z 91373 ']' 00:28:40.176 05:25:59 -- common/autotest_common.sh@930 -- # kill -0 91373 00:28:40.176 05:25:59 -- common/autotest_common.sh@931 -- # uname 00:28:40.176 05:25:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:40.176 05:25:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 91373 00:28:40.176 05:25:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:40.176 05:25:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:40.176 05:25:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 91373' 00:28:40.176 killing process with pid 91373 00:28:40.176 05:25:59 -- common/autotest_common.sh@945 -- # kill 91373 00:28:40.176 05:25:59 -- common/autotest_common.sh@950 -- # wait 91373 00:28:42.079 05:26:00 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:42.079 05:26:00 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:28:42.079 05:26:00 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:28:42.079 05:26:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:42.079 05:26:00 -- common/autotest_common.sh@10 -- # set +x 00:28:42.079 ************************************ 00:28:42.079 START TEST bdev_hello_world 00:28:42.079 ************************************ 00:28:42.079 05:26:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b 
Nvme0n1p1 '' 00:28:42.079 [2024-07-26 05:26:01.028308] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:42.079 [2024-07-26 05:26:01.028466] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91763 ] 00:28:42.338 [2024-07-26 05:26:01.197335] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.338 [2024-07-26 05:26:01.348528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.597 [2024-07-26 05:26:01.698914] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:28:42.597 [2024-07-26 05:26:01.698996] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:28:42.597 [2024-07-26 05:26:01.699032] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:28:42.597 [2024-07-26 05:26:01.701729] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:28:42.597 [2024-07-26 05:26:01.702388] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:28:42.597 [2024-07-26 05:26:01.702446] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:28:42.597 [2024-07-26 05:26:01.702730] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:28:42.597 00:28:42.597 [2024-07-26 05:26:01.702761] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:28:43.533 00:28:43.533 real 0m1.640s 00:28:43.533 user 0m1.328s 00:28:43.533 sys 0m0.210s 00:28:43.533 05:26:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:43.533 05:26:02 -- common/autotest_common.sh@10 -- # set +x 00:28:43.533 ************************************ 00:28:43.533 END TEST bdev_hello_world 00:28:43.533 ************************************ 00:28:43.791 05:26:02 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:28:43.791 05:26:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:43.791 05:26:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:43.791 05:26:02 -- common/autotest_common.sh@10 -- # set +x 00:28:43.791 ************************************ 00:28:43.791 START TEST bdev_bounds 00:28:43.791 ************************************ 00:28:43.791 05:26:02 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:28:43.791 Process bdevio pid: 91794 00:28:43.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:43.791 05:26:02 -- bdev/blockdev.sh@288 -- # bdevio_pid=91794 00:28:43.791 05:26:02 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:28:43.791 05:26:02 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 91794' 00:28:43.791 05:26:02 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:28:43.791 05:26:02 -- bdev/blockdev.sh@291 -- # waitforlisten 91794 00:28:43.791 05:26:02 -- common/autotest_common.sh@819 -- # '[' -z 91794 ']' 00:28:43.791 05:26:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:43.792 05:26:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:43.792 05:26:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
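The bdev_bounds test starting here wraps SPDK's bdevio app. Stripped of the harness functions, the traced commands amount to roughly the following (paths relative to an SPDK checkout; bdevio waits with -w until tests.py triggers the run over its RPC socket):

  test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
  test/bdev/bdevio/tests.py perform_tests

The per-test "passed" lines and the CUnit summary that follow are bdevio's own output.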
00:28:43.792 05:26:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:43.792 05:26:02 -- common/autotest_common.sh@10 -- # set +x 00:28:43.792 [2024-07-26 05:26:02.728303] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:43.792 [2024-07-26 05:26:02.728462] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91794 ] 00:28:43.792 [2024-07-26 05:26:02.897664] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:44.050 [2024-07-26 05:26:03.055905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.050 [2024-07-26 05:26:03.056077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.050 [2024-07-26 05:26:03.056095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:44.616 05:26:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:44.616 05:26:03 -- common/autotest_common.sh@852 -- # return 0 00:28:44.616 05:26:03 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:28:44.873 I/O targets: 00:28:44.873 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:28:44.873 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:28:44.873 00:28:44.873 00:28:44.873 CUnit - A unit testing framework for C - Version 2.1-3 00:28:44.873 http://cunit.sourceforge.net/ 00:28:44.873 00:28:44.873 00:28:44.873 Suite: bdevio tests on: Nvme0n1p2 00:28:44.873 Test: blockdev write read block ...passed 00:28:44.873 Test: blockdev write zeroes read block ...passed 00:28:44.873 Test: blockdev write zeroes read no split ...passed 00:28:44.873 Test: blockdev write zeroes read split ...passed 00:28:44.873 Test: blockdev write zeroes read split partial ...passed 00:28:44.874 Test: blockdev reset ...[2024-07-26 05:26:03.799670] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:28:44.874 passed 00:28:44.874 Test: blockdev write read 8 blocks ...[2024-07-26 05:26:03.802920] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:44.874 passed 00:28:44.874 Test: blockdev write read size > 128k ...passed 00:28:44.874 Test: blockdev write read invalid size ...passed 00:28:44.874 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:44.874 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:44.874 Test: blockdev write read max offset ...passed 00:28:44.874 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:44.874 Test: blockdev writev readv 8 blocks ...passed 00:28:44.874 Test: blockdev writev readv 30 x 1block ...passed 00:28:44.874 Test: blockdev writev readv block ...passed 00:28:44.874 Test: blockdev writev readv size > 128k ...passed 00:28:44.874 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:44.874 Test: blockdev comparev and writev ...[2024-07-26 05:26:03.812478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x28e00b000 len:0x1000 00:28:44.874 [2024-07-26 05:26:03.812556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:28:44.874 passed 00:28:44.874 Test: blockdev nvme passthru rw ...passed 00:28:44.874 Test: blockdev nvme passthru vendor specific ...passed 00:28:44.874 Test: blockdev nvme admin passthru ...passed 00:28:44.874 Test: blockdev copy ...passed 00:28:44.874 Suite: bdevio tests on: Nvme0n1p1 00:28:44.874 Test: blockdev write read block ...passed 00:28:44.874 Test: blockdev write zeroes read block ...passed 00:28:44.874 Test: blockdev write zeroes read no split ...passed 00:28:44.874 Test: blockdev write zeroes read split ...passed 00:28:44.874 Test: blockdev write zeroes read split partial ...passed 00:28:44.874 Test: blockdev reset ...[2024-07-26 05:26:03.867088] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:28:44.874 [2024-07-26 05:26:03.870478] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:44.874 passed 00:28:44.874 Test: blockdev write read 8 blocks ...passed 00:28:44.874 Test: blockdev write read size > 128k ...passed 00:28:44.874 Test: blockdev write read invalid size ...passed 00:28:44.874 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:44.874 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:44.874 Test: blockdev write read max offset ...passed 00:28:44.874 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:44.874 Test: blockdev writev readv 8 blocks ...passed 00:28:44.874 Test: blockdev writev readv 30 x 1block ...passed 00:28:44.874 Test: blockdev writev readv block ...passed 00:28:44.874 Test: blockdev writev readv size > 128k ...passed 00:28:44.874 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:44.874 Test: blockdev comparev and writev ...[2024-07-26 05:26:03.880595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x28e00d000 len:0x1000 00:28:44.874 [2024-07-26 05:26:03.880664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:28:44.874 passed 00:28:44.874 Test: blockdev nvme passthru rw ...passed 00:28:44.874 Test: blockdev nvme passthru vendor specific ...passed 00:28:44.874 Test: blockdev nvme admin passthru ...passed 00:28:44.874 Test: blockdev copy ...passed 00:28:44.874 00:28:44.874 Run Summary: Type Total Ran Passed Failed Inactive 00:28:44.874 suites 2 2 n/a 0 0 00:28:44.874 tests 46 46 46 0 0 00:28:44.874 asserts 284 284 284 0 n/a 00:28:44.874 00:28:44.874 Elapsed time = 0.362 seconds 00:28:44.874 0 00:28:44.874 05:26:03 -- bdev/blockdev.sh@293 -- # killprocess 91794 00:28:44.874 05:26:03 -- common/autotest_common.sh@926 -- # '[' -z 91794 ']' 00:28:44.874 05:26:03 -- common/autotest_common.sh@930 -- # kill -0 91794 00:28:44.874 05:26:03 -- common/autotest_common.sh@931 -- # uname 00:28:44.874 05:26:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:44.874 05:26:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 91794 00:28:44.874 killing process with pid 91794 00:28:44.874 05:26:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:44.874 05:26:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:44.874 05:26:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 91794' 00:28:44.874 05:26:03 -- common/autotest_common.sh@945 -- # kill 91794 00:28:44.874 05:26:03 -- common/autotest_common.sh@950 -- # wait 91794 00:28:45.808 ************************************ 00:28:45.808 END TEST bdev_bounds 00:28:45.808 ************************************ 00:28:45.808 05:26:04 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:28:45.808 00:28:45.808 real 0m2.210s 00:28:45.808 user 0m5.317s 00:28:45.808 sys 0m0.315s 00:28:45.808 05:26:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:45.808 05:26:04 -- common/autotest_common.sh@10 -- # set +x 00:28:45.808 05:26:04 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:28:45.808 05:26:04 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:28:45.808 05:26:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:45.808 05:26:04 -- common/autotest_common.sh@10 -- # set +x 00:28:46.067 ************************************ 00:28:46.067 START TEST bdev_nbd 
00:28:46.067 ************************************ 00:28:46.067 05:26:04 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:28:46.067 05:26:04 -- bdev/blockdev.sh@298 -- # uname -s 00:28:46.067 05:26:04 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:28:46.067 05:26:04 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:46.067 05:26:04 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:46.067 05:26:04 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2') 00:28:46.067 05:26:04 -- bdev/blockdev.sh@302 -- # local bdev_all 00:28:46.067 05:26:04 -- bdev/blockdev.sh@303 -- # local bdev_num=2 00:28:46.067 05:26:04 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:28:46.067 05:26:04 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:28:46.067 05:26:04 -- bdev/blockdev.sh@309 -- # local nbd_all 00:28:46.067 05:26:04 -- bdev/blockdev.sh@310 -- # bdev_num=2 00:28:46.067 05:26:04 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:46.067 05:26:04 -- bdev/blockdev.sh@312 -- # local nbd_list 00:28:46.067 05:26:04 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:28:46.067 05:26:04 -- bdev/blockdev.sh@313 -- # local bdev_list 00:28:46.067 05:26:04 -- bdev/blockdev.sh@316 -- # nbd_pid=91854 00:28:46.067 05:26:04 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:28:46.067 05:26:04 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:28:46.067 05:26:04 -- bdev/blockdev.sh@318 -- # waitforlisten 91854 /var/tmp/spdk-nbd.sock 00:28:46.067 05:26:04 -- common/autotest_common.sh@819 -- # '[' -z 91854 ']' 00:28:46.067 05:26:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:28:46.067 05:26:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:46.067 05:26:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:28:46.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:28:46.067 05:26:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:46.067 05:26:04 -- common/autotest_common.sh@10 -- # set +x 00:28:46.067 [2024-07-26 05:26:04.994038] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:28:46.067 [2024-07-26 05:26:04.994207] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:46.067 [2024-07-26 05:26:05.165708] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.326 [2024-07-26 05:26:05.323155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.893 05:26:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:46.893 05:26:05 -- common/autotest_common.sh@852 -- # return 0 00:28:46.893 05:26:05 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:28:46.893 05:26:05 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:46.893 05:26:05 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:28:46.893 05:26:05 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:28:46.893 05:26:05 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:28:46.893 05:26:05 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:46.893 05:26:05 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:28:46.893 05:26:05 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:28:46.893 05:26:05 -- bdev/nbd_common.sh@24 -- # local i 00:28:46.893 05:26:05 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:28:46.893 05:26:05 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:28:46.893 05:26:05 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:28:46.893 05:26:05 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:28:47.152 05:26:06 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:28:47.152 05:26:06 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:28:47.152 05:26:06 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:28:47.152 05:26:06 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:28:47.152 05:26:06 -- common/autotest_common.sh@857 -- # local i 00:28:47.152 05:26:06 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:28:47.152 05:26:06 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:28:47.152 05:26:06 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:28:47.152 05:26:06 -- common/autotest_common.sh@861 -- # break 00:28:47.152 05:26:06 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:28:47.152 05:26:06 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:28:47.152 05:26:06 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:47.152 1+0 records in 00:28:47.152 1+0 records out 00:28:47.152 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483288 s, 8.5 MB/s 00:28:47.152 05:26:06 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:47.152 05:26:06 -- common/autotest_common.sh@874 -- # size=4096 00:28:47.152 05:26:06 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:47.152 05:26:06 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:28:47.152 05:26:06 -- common/autotest_common.sh@877 -- # return 0 00:28:47.152 05:26:06 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:47.152 05:26:06 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:28:47.152 05:26:06 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:28:47.410 05:26:06 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:28:47.410 05:26:06 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:28:47.410 05:26:06 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:28:47.410 05:26:06 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:28:47.410 05:26:06 -- common/autotest_common.sh@857 -- # local i 00:28:47.410 05:26:06 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:28:47.410 05:26:06 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:28:47.410 05:26:06 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:28:47.410 05:26:06 -- common/autotest_common.sh@861 -- # break 00:28:47.410 05:26:06 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:28:47.410 05:26:06 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:28:47.410 05:26:06 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:47.410 1+0 records in 00:28:47.410 1+0 records out 00:28:47.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606001 s, 6.8 MB/s 00:28:47.410 05:26:06 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:47.410 05:26:06 -- common/autotest_common.sh@874 -- # size=4096 00:28:47.410 05:26:06 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:47.411 05:26:06 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:28:47.411 05:26:06 -- common/autotest_common.sh@877 -- # return 0 00:28:47.411 05:26:06 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:47.411 05:26:06 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:28:47.411 05:26:06 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:47.669 05:26:06 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:28:47.669 { 00:28:47.669 "nbd_device": "/dev/nbd0", 00:28:47.669 "bdev_name": "Nvme0n1p1" 00:28:47.669 }, 00:28:47.669 { 00:28:47.669 "nbd_device": "/dev/nbd1", 00:28:47.669 "bdev_name": "Nvme0n1p2" 00:28:47.669 } 00:28:47.669 ]' 00:28:47.669 05:26:06 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:28:47.669 05:26:06 -- bdev/nbd_common.sh@119 -- # echo '[ 00:28:47.669 { 00:28:47.669 "nbd_device": "/dev/nbd0", 00:28:47.669 "bdev_name": "Nvme0n1p1" 00:28:47.669 }, 00:28:47.669 { 00:28:47.669 "nbd_device": "/dev/nbd1", 00:28:47.669 "bdev_name": "Nvme0n1p2" 00:28:47.669 } 00:28:47.669 ]' 00:28:47.669 05:26:06 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:28:47.669 05:26:06 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:28:47.669 05:26:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:47.669 05:26:06 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:47.669 05:26:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:47.669 05:26:06 -- bdev/nbd_common.sh@51 -- # local i 00:28:47.669 05:26:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:47.669 05:26:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:47.669 05:26:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:47.669 05:26:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:47.669 05:26:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:47.669 05:26:06 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:47.669 05:26:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:47.669 05:26:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:47.669 05:26:06 -- bdev/nbd_common.sh@41 -- # break 00:28:47.669 05:26:06 -- bdev/nbd_common.sh@45 -- # return 0 00:28:47.669 05:26:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:47.669 05:26:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:47.927 05:26:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:47.927 05:26:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:47.927 05:26:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:47.927 05:26:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:47.927 05:26:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:47.927 05:26:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:47.927 05:26:06 -- bdev/nbd_common.sh@41 -- # break 00:28:47.927 05:26:06 -- bdev/nbd_common.sh@45 -- # return 0 00:28:47.927 05:26:06 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:47.927 05:26:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:47.927 05:26:06 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:48.186 05:26:07 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:48.186 05:26:07 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:48.186 05:26:07 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:48.186 05:26:07 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:48.186 05:26:07 -- bdev/nbd_common.sh@65 -- # echo '' 00:28:48.186 05:26:07 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:48.186 05:26:07 -- bdev/nbd_common.sh@65 -- # true 00:28:48.186 05:26:07 -- bdev/nbd_common.sh@65 -- # count=0 00:28:48.186 05:26:07 -- bdev/nbd_common.sh@66 -- # echo 0 00:28:48.186 05:26:07 -- bdev/nbd_common.sh@122 -- # count=0 00:28:48.186 05:26:07 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:28:48.186 05:26:07 -- bdev/nbd_common.sh@127 -- # return 0 00:28:48.186 05:26:07 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:28:48.186 05:26:07 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:48.186 05:26:07 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:28:48.186 05:26:07 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:28:48.186 05:26:07 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:48.186 05:26:07 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:28:48.186 05:26:07 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:28:48.186 05:26:07 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:48.186 05:26:07 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:28:48.186 05:26:07 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:48.186 05:26:07 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:48.186 05:26:07 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:48.186 05:26:07 -- bdev/nbd_common.sh@12 -- # local i 00:28:48.186 05:26:07 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:48.186 05:26:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:48.186 05:26:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:28:48.444 /dev/nbd0 00:28:48.444 05:26:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:48.444 05:26:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:48.444 05:26:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:28:48.444 05:26:07 -- common/autotest_common.sh@857 -- # local i 00:28:48.444 05:26:07 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:28:48.444 05:26:07 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:28:48.444 05:26:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:28:48.444 05:26:07 -- common/autotest_common.sh@861 -- # break 00:28:48.444 05:26:07 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:28:48.444 05:26:07 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:28:48.444 05:26:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:48.444 1+0 records in 00:28:48.444 1+0 records out 00:28:48.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557315 s, 7.3 MB/s 00:28:48.444 05:26:07 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:48.444 05:26:07 -- common/autotest_common.sh@874 -- # size=4096 00:28:48.444 05:26:07 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:48.444 05:26:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:28:48.444 05:26:07 -- common/autotest_common.sh@877 -- # return 0 00:28:48.444 05:26:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:48.444 05:26:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:48.444 05:26:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:28:48.703 /dev/nbd1 00:28:48.703 05:26:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:48.703 05:26:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:48.703 05:26:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:28:48.703 05:26:07 -- common/autotest_common.sh@857 -- # local i 00:28:48.703 05:26:07 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:28:48.703 05:26:07 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:28:48.703 05:26:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:28:48.703 05:26:07 -- common/autotest_common.sh@861 -- # break 00:28:48.703 05:26:07 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:28:48.703 05:26:07 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:28:48.703 05:26:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:48.703 1+0 records in 00:28:48.703 1+0 records out 00:28:48.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557058 s, 7.4 MB/s 00:28:48.703 05:26:07 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:48.703 05:26:07 -- common/autotest_common.sh@874 -- # size=4096 00:28:48.703 05:26:07 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:48.703 05:26:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:28:48.703 05:26:07 -- common/autotest_common.sh@877 -- # return 0 00:28:48.703 05:26:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:48.703 05:26:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:48.703 05:26:07 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 
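The nbd portion of the test is easier to read as the handful of RPCs it boils down to: each GPT partition bdev is mapped to a kernel /dev/nbdX node over the bdev_svc RPC socket and then poked with dd. A rough equivalent, assuming a bdev_svc instance already listening on /var/tmp/spdk-nbd.sock (/tmp/nbdtest stands in for the harness's scratch file):

  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # sanity read through the kernel
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks          # list the active mappings
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0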
00:28:48.703 05:26:07 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:48.703 05:26:07 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:48.962 05:26:07 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:28:48.962 { 00:28:48.962 "nbd_device": "/dev/nbd0", 00:28:48.962 "bdev_name": "Nvme0n1p1" 00:28:48.962 }, 00:28:48.962 { 00:28:48.962 "nbd_device": "/dev/nbd1", 00:28:48.962 "bdev_name": "Nvme0n1p2" 00:28:48.962 } 00:28:48.962 ]' 00:28:48.962 05:26:07 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:48.962 05:26:07 -- bdev/nbd_common.sh@64 -- # echo '[ 00:28:48.962 { 00:28:48.962 "nbd_device": "/dev/nbd0", 00:28:48.962 "bdev_name": "Nvme0n1p1" 00:28:48.962 }, 00:28:48.962 { 00:28:48.962 "nbd_device": "/dev/nbd1", 00:28:48.962 "bdev_name": "Nvme0n1p2" 00:28:48.962 } 00:28:48.962 ]' 00:28:48.962 05:26:07 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:28:48.962 /dev/nbd1' 00:28:48.962 05:26:07 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:28:48.962 /dev/nbd1' 00:28:48.962 05:26:07 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:48.962 05:26:07 -- bdev/nbd_common.sh@65 -- # count=2 00:28:48.962 05:26:07 -- bdev/nbd_common.sh@66 -- # echo 2 00:28:48.962 05:26:07 -- bdev/nbd_common.sh@95 -- # count=2 00:28:48.962 05:26:07 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:28:48.962 05:26:07 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:28:48.962 05:26:07 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:48.962 05:26:07 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:48.962 05:26:07 -- bdev/nbd_common.sh@71 -- # local operation=write 00:28:48.962 05:26:07 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:48.962 05:26:07 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:28:48.962 05:26:07 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:28:48.962 256+0 records in 00:28:48.962 256+0 records out 00:28:48.962 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00797865 s, 131 MB/s 00:28:48.962 05:26:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:48.962 05:26:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:28:48.962 256+0 records in 00:28:48.962 256+0 records out 00:28:48.962 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.09091 s, 11.5 MB/s 00:28:48.962 05:26:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:48.962 05:26:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:28:49.221 256+0 records in 00:28:49.221 256+0 records out 00:28:49.221 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117203 s, 8.9 MB/s 00:28:49.221 05:26:08 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:28:49.221 05:26:08 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:49.221 05:26:08 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:49.221 05:26:08 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:28:49.221 05:26:08 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:49.221 05:26:08 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:28:49.221 05:26:08 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 
00:28:49.221 05:26:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:49.221 05:26:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:28:49.221 05:26:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:49.221 05:26:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:28:49.221 05:26:08 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:49.221 05:26:08 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:28:49.221 05:26:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:49.221 05:26:08 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:49.221 05:26:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:49.221 05:26:08 -- bdev/nbd_common.sh@51 -- # local i 00:28:49.221 05:26:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:49.221 05:26:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:49.479 05:26:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:49.479 05:26:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:49.479 05:26:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:49.479 05:26:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:49.479 05:26:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:49.479 05:26:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:49.479 05:26:08 -- bdev/nbd_common.sh@41 -- # break 00:28:49.479 05:26:08 -- bdev/nbd_common.sh@45 -- # return 0 00:28:49.479 05:26:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:49.479 05:26:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:49.738 05:26:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:49.738 05:26:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:49.738 05:26:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:49.738 05:26:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:49.738 05:26:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:49.738 05:26:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:49.738 05:26:08 -- bdev/nbd_common.sh@41 -- # break 00:28:49.738 05:26:08 -- bdev/nbd_common.sh@45 -- # return 0 00:28:49.738 05:26:08 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:49.738 05:26:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:49.738 05:26:08 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:49.997 05:26:08 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:49.997 05:26:08 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:49.997 05:26:08 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:49.997 05:26:08 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:49.997 05:26:08 -- bdev/nbd_common.sh@65 -- # echo '' 00:28:49.997 05:26:08 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:49.997 05:26:08 -- bdev/nbd_common.sh@65 -- # true 00:28:49.997 05:26:08 -- bdev/nbd_common.sh@65 -- # count=0 00:28:49.997 05:26:08 -- bdev/nbd_common.sh@66 -- # echo 0 00:28:49.997 05:26:08 -- bdev/nbd_common.sh@104 -- # count=0 00:28:49.997 05:26:08 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:28:49.997 05:26:08 -- 
bdev/nbd_common.sh@109 -- # return 0 00:28:49.997 05:26:08 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:28:49.997 05:26:08 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:49.997 05:26:08 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:49.997 05:26:08 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:28:49.997 05:26:08 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:28:49.997 05:26:08 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:28:50.256 malloc_lvol_verify 00:28:50.256 05:26:09 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:28:50.515 9120eca2-df4f-42fd-8241-026a23c23393 00:28:50.515 05:26:09 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:28:50.774 db7664e0-0aa2-4c1f-b2ab-70573d260fee 00:28:50.774 05:26:09 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:28:50.774 /dev/nbd0 00:28:50.774 05:26:09 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:28:50.774 mke2fs 1.47.0 (5-Feb-2023) 00:28:50.774 00:28:50.774 Filesystem too small for a journal 00:28:50.774 Discarding device blocks: 0/1024 done 00:28:50.774 Creating filesystem with 1024 4k blocks and 1024 inodes 00:28:50.774 00:28:50.774 Allocating group tables: 0/1 done 00:28:50.774 Writing inode tables: 0/1 done 00:28:50.774 Writing superblocks and filesystem accounting information: 0/1 done 00:28:50.774 00:28:50.774 05:26:09 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:28:50.774 05:26:09 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:50.774 05:26:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:50.774 05:26:09 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:50.774 05:26:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:50.774 05:26:09 -- bdev/nbd_common.sh@51 -- # local i 00:28:50.774 05:26:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:50.774 05:26:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:51.032 05:26:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:51.032 05:26:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:51.032 05:26:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:51.032 05:26:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:51.033 05:26:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:51.033 05:26:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:51.033 05:26:10 -- bdev/nbd_common.sh@41 -- # break 00:28:51.033 05:26:10 -- bdev/nbd_common.sh@45 -- # return 0 00:28:51.033 05:26:10 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:28:51.033 05:26:10 -- bdev/nbd_common.sh@147 -- # return 0 00:28:51.033 05:26:10 -- bdev/blockdev.sh@324 -- # killprocess 91854 00:28:51.033 05:26:10 -- common/autotest_common.sh@926 -- # '[' -z 91854 ']' 00:28:51.033 05:26:10 -- common/autotest_common.sh@930 -- # kill -0 91854 00:28:51.033 05:26:10 -- common/autotest_common.sh@931 -- # uname 00:28:51.033 05:26:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:51.033 05:26:10 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 91854 00:28:51.033 05:26:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:51.033 05:26:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:51.033 killing process with pid 91854 00:28:51.033 05:26:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 91854' 00:28:51.033 05:26:10 -- common/autotest_common.sh@945 -- # kill 91854 00:28:51.033 05:26:10 -- common/autotest_common.sh@950 -- # wait 91854 00:28:52.411 05:26:11 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:28:52.411 00:28:52.411 real 0m6.162s 00:28:52.411 user 0m8.948s 00:28:52.411 sys 0m1.408s 00:28:52.411 05:26:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:52.411 05:26:11 -- common/autotest_common.sh@10 -- # set +x 00:28:52.411 ************************************ 00:28:52.411 END TEST bdev_nbd 00:28:52.411 ************************************ 00:28:52.411 05:26:11 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:28:52.411 05:26:11 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:28:52.411 05:26:11 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:28:52.411 skipping fio tests on NVMe due to multi-ns failures. 00:28:52.411 05:26:11 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:28:52.411 05:26:11 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:52.411 05:26:11 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:52.411 05:26:11 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:28:52.411 05:26:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:52.411 05:26:11 -- common/autotest_common.sh@10 -- # set +x 00:28:52.411 ************************************ 00:28:52.411 START TEST bdev_verify 00:28:52.411 ************************************ 00:28:52.411 05:26:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:52.411 [2024-07-26 05:26:11.196408] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:52.411 [2024-07-26 05:26:11.196550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92086 ] 00:28:52.411 [2024-07-26 05:26:11.349285] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:52.411 [2024-07-26 05:26:11.499438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.411 [2024-07-26 05:26:11.499459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.979 Running I/O for 5 seconds... 
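The verify stage launched above is a plain bdevperf run against the same bdev.json; without the wrapper functions it is essentially:

  build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3

i.e. queue depth 128, 4096-byte I/Os, the self-checking verify workload, and a 5-second run on core mask 0x3. The big-I/O variant further down reuses the same command with -o 65536.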
00:28:58.271
00:28:58.271 Latency(us)
00:28:58.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:58.271 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:58.271 Verification LBA range: start 0x0 length 0x4ff80
00:28:58.271 Nvme0n1p1 : 5.02 7612.80 29.74 0.00 0.00 16769.96 1630.95 27882.59
00:28:58.272 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:28:58.272 Verification LBA range: start 0x4ff80 length 0x4ff80
00:28:58.272 Nvme0n1p1 : 5.01 7609.44 29.72 0.00 0.00 16775.28 1936.29 27644.28
00:28:58.272 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:58.272 Verification LBA range: start 0x0 length 0x4ff7f
00:28:58.272 Nvme0n1p2 : 5.02 7600.32 29.69 0.00 0.00 16778.32 2770.39 30980.65
00:28:58.272 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:28:58.272 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:28:58.272 Nvme0n1p2 : 5.02 7612.31 29.74 0.00 0.00 16755.93 558.55 26929.34
00:28:58.272 ===================================================================================================================
00:28:58.272 Total : 30434.86 118.89 0.00 0.00 16769.87 558.55 30980.65
00:29:01.557
00:29:01.557 real 0m8.993s
00:29:01.557 user 0m16.946s
00:29:01.557 sys 0m0.237s
00:29:01.557 05:26:20 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:29:01.557 05:26:20 -- common/autotest_common.sh@10 -- # set +x
00:29:01.557 ************************************
00:29:01.557 END TEST bdev_verify
00:29:01.557 ************************************
00:29:01.557 05:26:20 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:29:01.557 05:26:20 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']'
00:29:01.557 05:26:20 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:29:01.557 05:26:20 -- common/autotest_common.sh@10 -- # set +x
00:29:01.557 ************************************
00:29:01.557 START TEST bdev_verify_big_io
00:29:01.557 ************************************
00:29:01.557 05:26:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:29:01.557 [2024-07-26 05:26:20.252832] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:29:01.557 [2024-07-26 05:26:20.253026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92195 ]
00:29:01.557 [2024-07-26 05:26:20.422150] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:29:01.557 [2024-07-26 05:26:20.572993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:01.557 [2024-07-26 05:26:20.573040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:02.125 Running I/O for 5 seconds...
00:29:07.397
00:29:07.397 Latency(us)
00:29:07.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:07.397 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:29:07.397 Verification LBA range: start 0x0 length 0x4ff8
00:29:07.397 Nvme0n1p1 : 5.08 897.54 56.10 0.00 0.00 141058.51 2681.02 201135.94
00:29:07.397 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:29:07.397 Verification LBA range: start 0x4ff8 length 0x4ff8
00:29:07.397 Nvme0n1p1 : 5.09 955.66 59.73 0.00 0.00 132291.27 10307.03 222107.46
00:29:07.397 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:29:07.397 Verification LBA range: start 0x0 length 0x4ff7
00:29:07.397 Nvme0n1p2 : 5.09 903.88 56.49 0.00 0.00 138511.27 700.04 167772.16
00:29:07.397 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:29:07.397 Verification LBA range: start 0x4ff7 length 0x4ff7
00:29:07.397 Nvme0n1p2 : 5.09 971.07 60.69 0.00 0.00 128550.79 655.36 178257.92
00:29:07.397 ===================================================================================================================
00:29:07.397 Total : 3728.16 233.01 0.00 0.00 134932.53 655.36 222107.46
00:29:08.335
00:29:08.335 real 0m7.189s
00:29:08.335 user 0m13.307s
00:29:08.335 sys 0m0.230s
00:29:08.335 05:26:27 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:29:08.335 05:26:27 -- common/autotest_common.sh@10 -- # set +x
00:29:08.335 ************************************
00:29:08.335 END TEST bdev_verify_big_io
00:29:08.335 ************************************
00:29:08.335 05:26:27 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:29:08.335 05:26:27 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']'
00:29:08.335 05:26:27 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:29:08.335 05:26:27 -- common/autotest_common.sh@10 -- # set +x
00:29:08.335 ************************************
00:29:08.335 START TEST bdev_write_zeroes
00:29:08.335 ************************************
00:29:08.594 05:26:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:29:08.594 [2024-07-26 05:26:27.475054] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:29:08.594 [2024-07-26 05:26:27.475185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92290 ]
00:29:08.594 [2024-07-26 05:26:27.624102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:08.853 [2024-07-26 05:26:27.773832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:09.112 Running I/O for 1 seconds...
00:29:10.046
00:29:10.046 Latency(us)
00:29:10.046 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:10.046 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:29:10.046 Nvme0n1p1 : 1.00 23641.77 92.35 0.00 0.00 5402.50 2904.44 16205.27
00:29:10.046 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:29:10.046 Nvme0n1p2 : 1.01 23617.94 92.26 0.00 0.00 5400.17 2740.60 9413.35
00:29:10.046 ===================================================================================================================
00:29:10.046 Total : 47259.71 184.61 0.00 0.00 5401.33 2740.60 16205.27
00:29:10.983
00:29:10.983 real 0m2.628s
00:29:10.983 user 0m2.347s
00:29:10.983 sys 0m0.181s
00:29:10.983 05:26:30 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:29:10.983 ************************************
00:29:10.983 END TEST bdev_write_zeroes
00:29:10.983 ************************************
00:29:10.983 05:26:30 -- common/autotest_common.sh@10 -- # set +x
00:29:11.242 05:26:30 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:29:11.242 05:26:30 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']'
00:29:11.242 05:26:30 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:29:11.242 05:26:30 -- common/autotest_common.sh@10 -- # set +x
00:29:11.242 ************************************
00:29:11.242 START TEST bdev_json_nonenclosed
00:29:11.242 ************************************
00:29:11.242 05:26:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:29:11.242 [2024-07-26 05:26:30.169127] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:29:11.242 [2024-07-26 05:26:30.169486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92332 ]
00:29:11.242 [2024-07-26 05:26:30.338067] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:11.500 [2024-07-26 05:26:30.496578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:11.501 [2024-07-26 05:26:30.496746] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:29:11.501 [2024-07-26 05:26:30.496771] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:11.759 ************************************ 00:29:11.759 END TEST bdev_json_nonenclosed 00:29:11.759 ************************************ 00:29:11.759 00:29:11.759 real 0m0.716s 00:29:11.759 user 0m0.491s 00:29:11.759 sys 0m0.125s 00:29:11.759 05:26:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:11.759 05:26:30 -- common/autotest_common.sh@10 -- # set +x 00:29:12.018 05:26:30 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:12.018 05:26:30 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:29:12.018 05:26:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:12.018 05:26:30 -- common/autotest_common.sh@10 -- # set +x 00:29:12.018 ************************************ 00:29:12.018 START TEST bdev_json_nonarray 00:29:12.018 ************************************ 00:29:12.018 05:26:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:12.018 [2024-07-26 05:26:30.943442] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:12.018 [2024-07-26 05:26:30.943607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92358 ] 00:29:12.018 [2024-07-26 05:26:31.113195] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.276 [2024-07-26 05:26:31.266731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:12.276 [2024-07-26 05:26:31.266918] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:29:12.276 [2024-07-26 05:26:31.266945] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:12.534 ************************************ 00:29:12.534 END TEST bdev_json_nonarray 00:29:12.534 ************************************ 00:29:12.534 00:29:12.534 real 0m0.727s 00:29:12.534 user 0m0.500s 00:29:12.534 sys 0m0.126s 00:29:12.535 05:26:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:12.535 05:26:31 -- common/autotest_common.sh@10 -- # set +x 00:29:12.793 05:26:31 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:29:12.793 05:26:31 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:29:12.793 05:26:31 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:29:12.793 05:26:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:12.793 05:26:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:12.793 05:26:31 -- common/autotest_common.sh@10 -- # set +x 00:29:12.793 ************************************ 00:29:12.793 START TEST bdev_gpt_uuid 00:29:12.793 ************************************ 00:29:12.793 05:26:31 -- common/autotest_common.sh@1104 -- # bdev_gpt_uuid 00:29:12.793 05:26:31 -- bdev/blockdev.sh@612 -- # local bdev 00:29:12.793 05:26:31 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:29:12.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:12.793 05:26:31 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=92389 00:29:12.793 05:26:31 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:12.793 05:26:31 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:29:12.793 05:26:31 -- bdev/blockdev.sh@47 -- # waitforlisten 92389 00:29:12.793 05:26:31 -- common/autotest_common.sh@819 -- # '[' -z 92389 ']' 00:29:12.793 05:26:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:12.793 05:26:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:12.793 05:26:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:12.793 05:26:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:12.793 05:26:31 -- common/autotest_common.sh@10 -- # set +x 00:29:12.793 [2024-07-26 05:26:31.734529] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:12.793 [2024-07-26 05:26:31.734906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92389 ] 00:29:13.051 [2024-07-26 05:26:31.905466] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.051 [2024-07-26 05:26:32.053156] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:13.051 [2024-07-26 05:26:32.053653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.619 05:26:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:13.619 05:26:32 -- common/autotest_common.sh@852 -- # return 0 00:29:13.619 05:26:32 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:13.619 05:26:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:13.619 05:26:32 -- common/autotest_common.sh@10 -- # set +x 00:29:13.878 Some configs were skipped because the RPC state that can call them passed over. 
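With the target up and the bdev config loaded, the test looks up each GPT partition bdev by its unique partition GUID and compares the alias and driver_specific fields with jq, as traced below. The same check can be reproduced by hand against a live target roughly like this (jq and a listening spdk_tgt are assumed; the GUID is the first partition's from this run):

part_uuid=6f89f330-603b-4116-ac73-2ca8eae53030
# Ask the target for the single bdev whose name or alias matches the GUID.
bdev_json=$(./scripts/rpc.py bdev_get_bdevs -b "$part_uuid")
# Both the first alias and the GPT unique_partition_guid should echo the GUID back.
echo "$bdev_json" | jq -r '.[0].aliases[0]'
echo "$bdev_json" | jq -r '.[0].driver_specific.gpt.unique_partition_guid'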
00:29:13.878 05:26:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:13.878 05:26:32 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:29:13.878 05:26:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:13.878 05:26:32 -- common/autotest_common.sh@10 -- # set +x 00:29:13.878 05:26:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:13.878 05:26:32 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:29:13.878 05:26:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:13.878 05:26:32 -- common/autotest_common.sh@10 -- # set +x 00:29:13.878 05:26:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:13.878 05:26:32 -- bdev/blockdev.sh@619 -- # bdev='[ 00:29:13.878 { 00:29:13.878 "name": "Nvme0n1p1", 00:29:13.878 "aliases": [ 00:29:13.878 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:29:13.878 ], 00:29:13.878 "product_name": "GPT Disk", 00:29:13.878 "block_size": 4096, 00:29:13.878 "num_blocks": 655104, 00:29:13.878 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:29:13.878 "assigned_rate_limits": { 00:29:13.878 "rw_ios_per_sec": 0, 00:29:13.878 "rw_mbytes_per_sec": 0, 00:29:13.878 "r_mbytes_per_sec": 0, 00:29:13.878 "w_mbytes_per_sec": 0 00:29:13.878 }, 00:29:13.878 "claimed": false, 00:29:13.878 "zoned": false, 00:29:13.878 "supported_io_types": { 00:29:13.878 "read": true, 00:29:13.878 "write": true, 00:29:13.878 "unmap": true, 00:29:13.878 "write_zeroes": true, 00:29:13.878 "flush": true, 00:29:13.878 "reset": true, 00:29:13.878 "compare": true, 00:29:13.878 "compare_and_write": false, 00:29:13.878 "abort": true, 00:29:13.878 "nvme_admin": false, 00:29:13.878 "nvme_io": false 00:29:13.878 }, 00:29:13.878 "driver_specific": { 00:29:13.878 "gpt": { 00:29:13.878 "base_bdev": "Nvme0n1", 00:29:13.878 "offset_blocks": 256, 00:29:13.878 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:29:13.878 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:29:13.878 "partition_name": "SPDK_TEST_first" 00:29:13.878 } 00:29:13.878 } 00:29:13.878 } 00:29:13.878 ]' 00:29:13.878 05:26:32 -- bdev/blockdev.sh@620 -- # jq -r length 00:29:13.878 05:26:32 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:29:13.878 05:26:32 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:29:13.878 05:26:32 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:29:13.878 05:26:32 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:29:13.878 05:26:32 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:29:13.878 05:26:32 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:29:13.878 05:26:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:13.878 05:26:32 -- common/autotest_common.sh@10 -- # set +x 00:29:13.878 05:26:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:13.878 05:26:32 -- bdev/blockdev.sh@624 -- # bdev='[ 00:29:13.878 { 00:29:13.878 "name": "Nvme0n1p2", 00:29:13.878 "aliases": [ 00:29:13.878 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:29:13.878 ], 00:29:13.878 "product_name": "GPT Disk", 00:29:13.878 "block_size": 4096, 00:29:13.878 "num_blocks": 655103, 00:29:13.878 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:29:13.878 "assigned_rate_limits": { 00:29:13.878 "rw_ios_per_sec": 0, 00:29:13.878 
"rw_mbytes_per_sec": 0, 00:29:13.878 "r_mbytes_per_sec": 0, 00:29:13.878 "w_mbytes_per_sec": 0 00:29:13.878 }, 00:29:13.878 "claimed": false, 00:29:13.878 "zoned": false, 00:29:13.878 "supported_io_types": { 00:29:13.878 "read": true, 00:29:13.878 "write": true, 00:29:13.878 "unmap": true, 00:29:13.878 "write_zeroes": true, 00:29:13.878 "flush": true, 00:29:13.878 "reset": true, 00:29:13.878 "compare": true, 00:29:13.878 "compare_and_write": false, 00:29:13.878 "abort": true, 00:29:13.878 "nvme_admin": false, 00:29:13.878 "nvme_io": false 00:29:13.878 }, 00:29:13.878 "driver_specific": { 00:29:13.878 "gpt": { 00:29:13.878 "base_bdev": "Nvme0n1", 00:29:13.878 "offset_blocks": 655360, 00:29:13.878 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:29:13.878 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:29:13.878 "partition_name": "SPDK_TEST_second" 00:29:13.878 } 00:29:13.878 } 00:29:13.878 } 00:29:13.878 ]' 00:29:13.878 05:26:32 -- bdev/blockdev.sh@625 -- # jq -r length 00:29:13.878 05:26:32 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:29:13.878 05:26:32 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:29:13.878 05:26:32 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:29:13.878 05:26:32 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:29:13.878 05:26:32 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:29:13.878 05:26:32 -- bdev/blockdev.sh@629 -- # killprocess 92389 00:29:13.878 05:26:32 -- common/autotest_common.sh@926 -- # '[' -z 92389 ']' 00:29:13.878 05:26:32 -- common/autotest_common.sh@930 -- # kill -0 92389 00:29:13.878 05:26:32 -- common/autotest_common.sh@931 -- # uname 00:29:13.878 05:26:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:13.878 05:26:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92389 00:29:13.878 killing process with pid 92389 00:29:13.878 05:26:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:13.878 05:26:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:13.878 05:26:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92389' 00:29:13.878 05:26:32 -- common/autotest_common.sh@945 -- # kill 92389 00:29:13.878 05:26:32 -- common/autotest_common.sh@950 -- # wait 92389 00:29:15.785 ************************************ 00:29:15.785 END TEST bdev_gpt_uuid 00:29:15.785 ************************************ 00:29:15.785 00:29:15.785 real 0m2.946s 00:29:15.785 user 0m3.025s 00:29:15.785 sys 0m0.408s 00:29:15.785 05:26:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:15.785 05:26:34 -- common/autotest_common.sh@10 -- # set +x 00:29:15.785 05:26:34 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:29:15.785 05:26:34 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:29:15.785 05:26:34 -- bdev/blockdev.sh@809 -- # cleanup 00:29:15.785 05:26:34 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:29:15.785 05:26:34 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:15.785 05:26:34 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 00:29:15.785 05:26:34 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:29:15.785 05:26:34 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:29:15.785 05:26:34 -- 
bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:16.043 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:29:16.043 Waiting for block devices as requested 00:29:16.043 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:29:16.043 05:26:35 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme0n1 ]] 00:29:16.043 05:26:35 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme0n1 00:29:16.301 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:29:16.301 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:29:16.301 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:29:16.301 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:29:16.301 05:26:35 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:29:16.301 00:29:16.301 real 0m41.065s 00:29:16.301 user 0m59.581s 00:29:16.301 sys 0m5.555s 00:29:16.301 05:26:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:16.301 ************************************ 00:29:16.301 END TEST blockdev_nvme_gpt 00:29:16.301 05:26:35 -- common/autotest_common.sh@10 -- # set +x 00:29:16.301 ************************************ 00:29:16.559 05:26:35 -- spdk/autotest.sh@222 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:29:16.559 05:26:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:16.560 05:26:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:16.560 05:26:35 -- common/autotest_common.sh@10 -- # set +x 00:29:16.560 ************************************ 00:29:16.560 START TEST nvme 00:29:16.560 ************************************ 00:29:16.560 05:26:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:29:16.560 * Looking for test storage... 00:29:16.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:29:16.560 05:26:35 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:16.818 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:29:17.076 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:29:17.675 05:26:36 -- nvme/nvme.sh@79 -- # uname 00:29:17.675 05:26:36 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:29:17.675 05:26:36 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:29:17.675 05:26:36 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:29:17.675 05:26:36 -- common/autotest_common.sh@1058 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:29:17.675 05:26:36 -- common/autotest_common.sh@1044 -- # _randomize_va_space=2 00:29:17.675 05:26:36 -- common/autotest_common.sh@1045 -- # echo 0 00:29:17.675 05:26:36 -- common/autotest_common.sh@1047 -- # stubpid=92756 00:29:17.675 05:26:36 -- common/autotest_common.sh@1046 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:29:17.675 Waiting for stub to ready for secondary processes... 00:29:17.675 05:26:36 -- common/autotest_common.sh@1048 -- # echo Waiting for stub to ready for secondary processes... 00:29:17.675 05:26:36 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:29:17.675 05:26:36 -- common/autotest_common.sh@1051 -- # [[ -e /proc/92756 ]] 00:29:17.675 05:26:36 -- common/autotest_common.sh@1052 -- # sleep 1s 00:29:17.675 [2024-07-26 05:26:36.621347] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
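In the wipefs output above, the eight bytes erased at offsets 0x1000 and 0x13ffff000 are the primary and backup GPT header signatures (LBA 1 and the last LBA of the 4096-byte-block namespace), and the two bytes at 0x1fe are the protective-MBR boot signature 55 aa. Decoding the hex shown confirms the GPT magic (xxd is assumed to be installed):

# 45 46 49 20 50 41 52 54 is ASCII for the GPT signature string "EFI PART".
echo '45 46 49 20 50 41 52 54' | xxd -r -p; echo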
00:29:17.675 [2024-07-26 05:26:36.621507] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:18.612 [2024-07-26 05:26:37.426792] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:18.612 05:26:37 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:29:18.612 05:26:37 -- common/autotest_common.sh@1051 -- # [[ -e /proc/92756 ]] 00:29:18.613 05:26:37 -- common/autotest_common.sh@1052 -- # sleep 1s 00:29:18.613 [2024-07-26 05:26:37.620156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:18.613 [2024-07-26 05:26:37.620270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.613 [2024-07-26 05:26:37.620280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:18.613 [2024-07-26 05:26:37.634483] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:29:18.613 [2024-07-26 05:26:37.643809] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:29:18.613 [2024-07-26 05:26:37.644037] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:29:19.547 05:26:38 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:29:19.547 done. 00:29:19.547 05:26:38 -- common/autotest_common.sh@1054 -- # echo done. 00:29:19.547 05:26:38 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:29:19.547 05:26:38 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:29:19.547 05:26:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:19.547 05:26:38 -- common/autotest_common.sh@10 -- # set +x 00:29:19.547 ************************************ 00:29:19.547 START TEST nvme_reset 00:29:19.547 ************************************ 00:29:19.547 05:26:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:29:19.805 Initializing NVMe Controllers 00:29:19.805 Skipping QEMU NVMe SSD at 0000:00:06.0 00:29:19.805 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:29:19.805 00:29:19.805 real 0m0.296s 00:29:19.805 user 0m0.106s 00:29:19.805 sys 0m0.148s 00:29:19.805 05:26:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:19.805 05:26:38 -- common/autotest_common.sh@10 -- # set +x 00:29:19.805 ************************************ 00:29:19.805 END TEST nvme_reset 00:29:19.805 ************************************ 00:29:20.064 05:26:38 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:29:20.064 05:26:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:20.064 05:26:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:20.064 05:26:38 -- common/autotest_common.sh@10 -- # set +x 00:29:20.064 ************************************ 00:29:20.064 START TEST nvme_identify 00:29:20.064 ************************************ 00:29:20.064 05:26:38 -- common/autotest_common.sh@1104 -- # nvme_identify 00:29:20.064 05:26:38 -- nvme/nvme.sh@12 -- # bdfs=() 00:29:20.064 05:26:38 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:29:20.064 05:26:38 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:29:20.064 05:26:38 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:29:20.064 05:26:38 -- common/autotest_common.sh@1498 -- # bdfs=() 
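The get_nvme_bdfs helper being traced here builds the list of local NVMe PCI addresses by piping gen_nvme.sh's JSON through jq; nvme.sh then runs the identify example once per address. A condensed sketch of that enumeration loop (paths assume the SPDK repo root):

# Collect controller PCI addresses the same way the test does.
bdfs=($(./scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
# Run the identify example against each controller, as nvme.sh does below.
for bdf in "${bdfs[@]}"; do
    ./build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0
done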
00:29:20.064 05:26:38 -- common/autotest_common.sh@1498 -- # local bdfs 00:29:20.064 05:26:38 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:20.064 05:26:38 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:20.064 05:26:38 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:29:20.064 05:26:38 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:29:20.064 05:26:38 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:29:20.064 05:26:38 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:29:20.323 [2024-07-26 05:26:39.250878] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 92785 terminated unexpected 00:29:20.323 ===================================================== 00:29:20.323 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:29:20.323 ===================================================== 00:29:20.323 Controller Capabilities/Features 00:29:20.323 ================================ 00:29:20.323 Vendor ID: 1b36 00:29:20.323 Subsystem Vendor ID: 1af4 00:29:20.323 Serial Number: 12340 00:29:20.323 Model Number: QEMU NVMe Ctrl 00:29:20.323 Firmware Version: 8.0.0 00:29:20.323 Recommended Arb Burst: 6 00:29:20.323 IEEE OUI Identifier: 00 54 52 00:29:20.323 Multi-path I/O 00:29:20.323 May have multiple subsystem ports: No 00:29:20.323 May have multiple controllers: No 00:29:20.323 Associated with SR-IOV VF: No 00:29:20.323 Max Data Transfer Size: 524288 00:29:20.323 Max Number of Namespaces: 256 00:29:20.323 Max Number of I/O Queues: 64 00:29:20.323 NVMe Specification Version (VS): 1.4 00:29:20.323 NVMe Specification Version (Identify): 1.4 00:29:20.323 Maximum Queue Entries: 2048 00:29:20.323 Contiguous Queues Required: Yes 00:29:20.323 Arbitration Mechanisms Supported 00:29:20.323 Weighted Round Robin: Not Supported 00:29:20.323 Vendor Specific: Not Supported 00:29:20.323 Reset Timeout: 7500 ms 00:29:20.323 Doorbell Stride: 4 bytes 00:29:20.323 NVM Subsystem Reset: Not Supported 00:29:20.323 Command Sets Supported 00:29:20.323 NVM Command Set: Supported 00:29:20.323 Boot Partition: Not Supported 00:29:20.323 Memory Page Size Minimum: 4096 bytes 00:29:20.323 Memory Page Size Maximum: 65536 bytes 00:29:20.323 Persistent Memory Region: Not Supported 00:29:20.323 Optional Asynchronous Events Supported 00:29:20.323 Namespace Attribute Notices: Supported 00:29:20.323 Firmware Activation Notices: Not Supported 00:29:20.323 ANA Change Notices: Not Supported 00:29:20.323 PLE Aggregate Log Change Notices: Not Supported 00:29:20.323 LBA Status Info Alert Notices: Not Supported 00:29:20.323 EGE Aggregate Log Change Notices: Not Supported 00:29:20.323 Normal NVM Subsystem Shutdown event: Not Supported 00:29:20.323 Zone Descriptor Change Notices: Not Supported 00:29:20.323 Discovery Log Change Notices: Not Supported 00:29:20.323 Controller Attributes 00:29:20.323 128-bit Host Identifier: Not Supported 00:29:20.323 Non-Operational Permissive Mode: Not Supported 00:29:20.323 NVM Sets: Not Supported 00:29:20.323 Read Recovery Levels: Not Supported 00:29:20.323 Endurance Groups: Not Supported 00:29:20.323 Predictable Latency Mode: Not Supported 00:29:20.323 Traffic Based Keep ALive: Not Supported 00:29:20.323 Namespace Granularity: Not Supported 00:29:20.323 SQ Associations: Not Supported 00:29:20.323 UUID List: Not Supported 00:29:20.323 Multi-Domain Subsystem: Not Supported 00:29:20.323 
Fixed Capacity Management: Not Supported 00:29:20.323 Variable Capacity Management: Not Supported 00:29:20.323 Delete Endurance Group: Not Supported 00:29:20.323 Delete NVM Set: Not Supported 00:29:20.323 Extended LBA Formats Supported: Supported 00:29:20.323 Flexible Data Placement Supported: Not Supported 00:29:20.323 00:29:20.323 Controller Memory Buffer Support 00:29:20.323 ================================ 00:29:20.323 Supported: No 00:29:20.323 00:29:20.323 Persistent Memory Region Support 00:29:20.323 ================================ 00:29:20.323 Supported: No 00:29:20.323 00:29:20.323 Admin Command Set Attributes 00:29:20.323 ============================ 00:29:20.323 Security Send/Receive: Not Supported 00:29:20.323 Format NVM: Supported 00:29:20.323 Firmware Activate/Download: Not Supported 00:29:20.323 Namespace Management: Supported 00:29:20.323 Device Self-Test: Not Supported 00:29:20.323 Directives: Supported 00:29:20.323 NVMe-MI: Not Supported 00:29:20.323 Virtualization Management: Not Supported 00:29:20.323 Doorbell Buffer Config: Supported 00:29:20.323 Get LBA Status Capability: Not Supported 00:29:20.323 Command & Feature Lockdown Capability: Not Supported 00:29:20.323 Abort Command Limit: 4 00:29:20.324 Async Event Request Limit: 4 00:29:20.324 Number of Firmware Slots: N/A 00:29:20.324 Firmware Slot 1 Read-Only: N/A 00:29:20.324 Firmware Activation Without Reset: N/A 00:29:20.324 Multiple Update Detection Support: N/A 00:29:20.324 Firmware Update Granularity: No Information Provided 00:29:20.324 Per-Namespace SMART Log: Yes 00:29:20.324 Asymmetric Namespace Access Log Page: Not Supported 00:29:20.324 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:29:20.324 Command Effects Log Page: Supported 00:29:20.324 Get Log Page Extended Data: Supported 00:29:20.324 Telemetry Log Pages: Not Supported 00:29:20.324 Persistent Event Log Pages: Not Supported 00:29:20.324 Supported Log Pages Log Page: May Support 00:29:20.324 Commands Supported & Effects Log Page: Not Supported 00:29:20.324 Feature Identifiers & Effects Log Page:May Support 00:29:20.324 NVMe-MI Commands & Effects Log Page: May Support 00:29:20.324 Data Area 4 for Telemetry Log: Not Supported 00:29:20.324 Error Log Page Entries Supported: 1 00:29:20.324 Keep Alive: Not Supported 00:29:20.324 00:29:20.324 NVM Command Set Attributes 00:29:20.324 ========================== 00:29:20.324 Submission Queue Entry Size 00:29:20.324 Max: 64 00:29:20.324 Min: 64 00:29:20.324 Completion Queue Entry Size 00:29:20.324 Max: 16 00:29:20.324 Min: 16 00:29:20.324 Number of Namespaces: 256 00:29:20.324 Compare Command: Supported 00:29:20.324 Write Uncorrectable Command: Not Supported 00:29:20.324 Dataset Management Command: Supported 00:29:20.324 Write Zeroes Command: Supported 00:29:20.324 Set Features Save Field: Supported 00:29:20.324 Reservations: Not Supported 00:29:20.324 Timestamp: Supported 00:29:20.324 Copy: Supported 00:29:20.324 Volatile Write Cache: Present 00:29:20.324 Atomic Write Unit (Normal): 1 00:29:20.324 Atomic Write Unit (PFail): 1 00:29:20.324 Atomic Compare & Write Unit: 1 00:29:20.324 Fused Compare & Write: Not Supported 00:29:20.324 Scatter-Gather List 00:29:20.324 SGL Command Set: Supported 00:29:20.324 SGL Keyed: Not Supported 00:29:20.324 SGL Bit Bucket Descriptor: Not Supported 00:29:20.324 SGL Metadata Pointer: Not Supported 00:29:20.324 Oversized SGL: Not Supported 00:29:20.324 SGL Metadata Address: Not Supported 00:29:20.324 SGL Offset: Not Supported 00:29:20.324 Transport SGL Data Block: Not Supported 
00:29:20.324 Replay Protected Memory Block: Not Supported 00:29:20.324 00:29:20.324 Firmware Slot Information 00:29:20.324 ========================= 00:29:20.324 Active slot: 1 00:29:20.324 Slot 1 Firmware Revision: 1.0 00:29:20.324 00:29:20.324 00:29:20.324 Commands Supported and Effects 00:29:20.324 ============================== 00:29:20.324 Admin Commands 00:29:20.324 -------------- 00:29:20.324 Delete I/O Submission Queue (00h): Supported 00:29:20.324 Create I/O Submission Queue (01h): Supported 00:29:20.324 Get Log Page (02h): Supported 00:29:20.324 Delete I/O Completion Queue (04h): Supported 00:29:20.324 Create I/O Completion Queue (05h): Supported 00:29:20.324 Identify (06h): Supported 00:29:20.324 Abort (08h): Supported 00:29:20.324 Set Features (09h): Supported 00:29:20.324 Get Features (0Ah): Supported 00:29:20.324 Asynchronous Event Request (0Ch): Supported 00:29:20.324 Namespace Attachment (15h): Supported NS-Inventory-Change 00:29:20.324 Directive Send (19h): Supported 00:29:20.324 Directive Receive (1Ah): Supported 00:29:20.324 Virtualization Management (1Ch): Supported 00:29:20.324 Doorbell Buffer Config (7Ch): Supported 00:29:20.324 Format NVM (80h): Supported LBA-Change 00:29:20.324 I/O Commands 00:29:20.324 ------------ 00:29:20.324 Flush (00h): Supported LBA-Change 00:29:20.324 Write (01h): Supported LBA-Change 00:29:20.324 Read (02h): Supported 00:29:20.324 Compare (05h): Supported 00:29:20.324 Write Zeroes (08h): Supported LBA-Change 00:29:20.324 Dataset Management (09h): Supported LBA-Change 00:29:20.324 Unknown (0Ch): Supported 00:29:20.324 Unknown (12h): Supported 00:29:20.324 Copy (19h): Supported LBA-Change 00:29:20.324 Unknown (1Dh): Supported LBA-Change 00:29:20.324 00:29:20.324 Error Log 00:29:20.324 ========= 00:29:20.324 00:29:20.324 Arbitration 00:29:20.324 =========== 00:29:20.324 Arbitration Burst: no limit 00:29:20.324 00:29:20.324 Power Management 00:29:20.324 ================ 00:29:20.324 Number of Power States: 1 00:29:20.324 Current Power State: Power State #0 00:29:20.324 Power State #0: 00:29:20.324 Max Power: 25.00 W 00:29:20.324 Non-Operational State: Operational 00:29:20.324 Entry Latency: 16 microseconds 00:29:20.324 Exit Latency: 4 microseconds 00:29:20.324 Relative Read Throughput: 0 00:29:20.324 Relative Read Latency: 0 00:29:20.324 Relative Write Throughput: 0 00:29:20.324 Relative Write Latency: 0 00:29:20.324 Idle Power: Not Reported 00:29:20.324 Active Power: Not Reported 00:29:20.324 Non-Operational Permissive Mode: Not Supported 00:29:20.324 00:29:20.324 Health Information 00:29:20.324 ================== 00:29:20.324 Critical Warnings: 00:29:20.324 Available Spare Space: OK 00:29:20.324 Temperature: OK 00:29:20.324 Device Reliability: OK 00:29:20.324 Read Only: No 00:29:20.324 Volatile Memory Backup: OK 00:29:20.324 Current Temperature: 323 Kelvin (50 Celsius) 00:29:20.324 Temperature Threshold: 343 Kelvin (70 Celsius) 00:29:20.324 Available Spare: 0% 00:29:20.324 Available Spare Threshold: 0% 00:29:20.324 Life Percentage Used: 0% 00:29:20.324 Data Units Read: 7801 00:29:20.324 Data Units Written: 3798 00:29:20.324 Host Read Commands: 367419 00:29:20.324 Host Write Commands: 198856 00:29:20.324 Controller Busy Time: 0 minutes 00:29:20.324 Power Cycles: 0 00:29:20.324 Power On Hours: 0 hours 00:29:20.324 Unsafe Shutdowns: 0 00:29:20.324 Unrecoverable Media Errors: 0 00:29:20.324 Lifetime Error Log Entries: 0 00:29:20.324 Warning Temperature Time: 0 minutes 00:29:20.324 Critical Temperature Time: 0 minutes 00:29:20.324 00:29:20.324 
Number of Queues 00:29:20.324 ================ 00:29:20.324 Number of I/O Submission Queues: 64 00:29:20.324 Number of I/O Completion Queues: 64 00:29:20.324 00:29:20.324 ZNS Specific Controller Data 00:29:20.324 ============================ 00:29:20.324 Zone Append Size Limit: 0 00:29:20.324 00:29:20.324 00:29:20.324 Active Namespaces 00:29:20.324 ================= 00:29:20.324 Namespace ID:1 00:29:20.324 Error Recovery Timeout: Unlimited 00:29:20.324 Command Set Identifier: NVM (00h) 00:29:20.324 Deallocate: Supported 00:29:20.324 Deallocated/Unwritten Error: Supported 00:29:20.324 Deallocated Read Value: All 0x00 00:29:20.324 Deallocate in Write Zeroes: Not Supported 00:29:20.324 Deallocated Guard Field: 0xFFFF 00:29:20.324 Flush: Supported 00:29:20.324 Reservation: Not Supported 00:29:20.324 Namespace Sharing Capabilities: Private 00:29:20.324 Size (in LBAs): 1310720 (5GiB) 00:29:20.324 Capacity (in LBAs): 1310720 (5GiB) 00:29:20.324 Utilization (in LBAs): 1310720 (5GiB) 00:29:20.324 Thin Provisioning: Not Supported 00:29:20.324 Per-NS Atomic Units: No 00:29:20.324 Maximum Single Source Range Length: 128 00:29:20.324 Maximum Copy Length: 128 00:29:20.324 Maximum Source Range Count: 128 00:29:20.324 NGUID/EUI64 Never Reused: No 00:29:20.324 Namespace Write Protected: No 00:29:20.324 Number of LBA Formats: 8 00:29:20.324 Current LBA Format: LBA Format #04 00:29:20.324 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:20.324 LBA Format #01: Data Size: 512 Metadata Size: 8 00:29:20.324 LBA Format #02: Data Size: 512 Metadata Size: 16 00:29:20.324 LBA Format #03: Data Size: 512 Metadata Size: 64 00:29:20.324 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:29:20.324 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:29:20.324 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:29:20.324 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:29:20.324 00:29:20.324 05:26:39 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:29:20.324 05:26:39 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:29:20.583 ===================================================== 00:29:20.583 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:29:20.583 ===================================================== 00:29:20.583 Controller Capabilities/Features 00:29:20.583 ================================ 00:29:20.583 Vendor ID: 1b36 00:29:20.583 Subsystem Vendor ID: 1af4 00:29:20.583 Serial Number: 12340 00:29:20.583 Model Number: QEMU NVMe Ctrl 00:29:20.583 Firmware Version: 8.0.0 00:29:20.583 Recommended Arb Burst: 6 00:29:20.583 IEEE OUI Identifier: 00 54 52 00:29:20.583 Multi-path I/O 00:29:20.583 May have multiple subsystem ports: No 00:29:20.583 May have multiple controllers: No 00:29:20.583 Associated with SR-IOV VF: No 00:29:20.583 Max Data Transfer Size: 524288 00:29:20.583 Max Number of Namespaces: 256 00:29:20.583 Max Number of I/O Queues: 64 00:29:20.583 NVMe Specification Version (VS): 1.4 00:29:20.583 NVMe Specification Version (Identify): 1.4 00:29:20.583 Maximum Queue Entries: 2048 00:29:20.583 Contiguous Queues Required: Yes 00:29:20.583 Arbitration Mechanisms Supported 00:29:20.583 Weighted Round Robin: Not Supported 00:29:20.584 Vendor Specific: Not Supported 00:29:20.584 Reset Timeout: 7500 ms 00:29:20.584 Doorbell Stride: 4 bytes 00:29:20.584 NVM Subsystem Reset: Not Supported 00:29:20.584 Command Sets Supported 00:29:20.584 NVM Command Set: Supported 00:29:20.584 Boot Partition: Not Supported 00:29:20.584 Memory Page Size 
Minimum: 4096 bytes 00:29:20.584 Memory Page Size Maximum: 65536 bytes 00:29:20.584 Persistent Memory Region: Not Supported 00:29:20.584 Optional Asynchronous Events Supported 00:29:20.584 Namespace Attribute Notices: Supported 00:29:20.584 Firmware Activation Notices: Not Supported 00:29:20.584 ANA Change Notices: Not Supported 00:29:20.584 PLE Aggregate Log Change Notices: Not Supported 00:29:20.584 LBA Status Info Alert Notices: Not Supported 00:29:20.584 EGE Aggregate Log Change Notices: Not Supported 00:29:20.584 Normal NVM Subsystem Shutdown event: Not Supported 00:29:20.584 Zone Descriptor Change Notices: Not Supported 00:29:20.584 Discovery Log Change Notices: Not Supported 00:29:20.584 Controller Attributes 00:29:20.584 128-bit Host Identifier: Not Supported 00:29:20.584 Non-Operational Permissive Mode: Not Supported 00:29:20.584 NVM Sets: Not Supported 00:29:20.584 Read Recovery Levels: Not Supported 00:29:20.584 Endurance Groups: Not Supported 00:29:20.584 Predictable Latency Mode: Not Supported 00:29:20.584 Traffic Based Keep ALive: Not Supported 00:29:20.584 Namespace Granularity: Not Supported 00:29:20.584 SQ Associations: Not Supported 00:29:20.584 UUID List: Not Supported 00:29:20.584 Multi-Domain Subsystem: Not Supported 00:29:20.584 Fixed Capacity Management: Not Supported 00:29:20.584 Variable Capacity Management: Not Supported 00:29:20.584 Delete Endurance Group: Not Supported 00:29:20.584 Delete NVM Set: Not Supported 00:29:20.584 Extended LBA Formats Supported: Supported 00:29:20.584 Flexible Data Placement Supported: Not Supported 00:29:20.584 00:29:20.584 Controller Memory Buffer Support 00:29:20.584 ================================ 00:29:20.584 Supported: No 00:29:20.584 00:29:20.584 Persistent Memory Region Support 00:29:20.584 ================================ 00:29:20.584 Supported: No 00:29:20.584 00:29:20.584 Admin Command Set Attributes 00:29:20.584 ============================ 00:29:20.584 Security Send/Receive: Not Supported 00:29:20.584 Format NVM: Supported 00:29:20.584 Firmware Activate/Download: Not Supported 00:29:20.584 Namespace Management: Supported 00:29:20.584 Device Self-Test: Not Supported 00:29:20.584 Directives: Supported 00:29:20.584 NVMe-MI: Not Supported 00:29:20.584 Virtualization Management: Not Supported 00:29:20.584 Doorbell Buffer Config: Supported 00:29:20.584 Get LBA Status Capability: Not Supported 00:29:20.584 Command & Feature Lockdown Capability: Not Supported 00:29:20.584 Abort Command Limit: 4 00:29:20.584 Async Event Request Limit: 4 00:29:20.584 Number of Firmware Slots: N/A 00:29:20.584 Firmware Slot 1 Read-Only: N/A 00:29:20.584 Firmware Activation Without Reset: N/A 00:29:20.584 Multiple Update Detection Support: N/A 00:29:20.584 Firmware Update Granularity: No Information Provided 00:29:20.584 Per-Namespace SMART Log: Yes 00:29:20.584 Asymmetric Namespace Access Log Page: Not Supported 00:29:20.584 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:29:20.584 Command Effects Log Page: Supported 00:29:20.584 Get Log Page Extended Data: Supported 00:29:20.584 Telemetry Log Pages: Not Supported 00:29:20.584 Persistent Event Log Pages: Not Supported 00:29:20.584 Supported Log Pages Log Page: May Support 00:29:20.584 Commands Supported & Effects Log Page: Not Supported 00:29:20.584 Feature Identifiers & Effects Log Page:May Support 00:29:20.584 NVMe-MI Commands & Effects Log Page: May Support 00:29:20.584 Data Area 4 for Telemetry Log: Not Supported 00:29:20.584 Error Log Page Entries Supported: 1 00:29:20.584 Keep Alive: Not 
Supported 00:29:20.584 00:29:20.584 NVM Command Set Attributes 00:29:20.584 ========================== 00:29:20.584 Submission Queue Entry Size 00:29:20.584 Max: 64 00:29:20.584 Min: 64 00:29:20.584 Completion Queue Entry Size 00:29:20.584 Max: 16 00:29:20.584 Min: 16 00:29:20.584 Number of Namespaces: 256 00:29:20.584 Compare Command: Supported 00:29:20.584 Write Uncorrectable Command: Not Supported 00:29:20.584 Dataset Management Command: Supported 00:29:20.584 Write Zeroes Command: Supported 00:29:20.584 Set Features Save Field: Supported 00:29:20.584 Reservations: Not Supported 00:29:20.584 Timestamp: Supported 00:29:20.584 Copy: Supported 00:29:20.584 Volatile Write Cache: Present 00:29:20.584 Atomic Write Unit (Normal): 1 00:29:20.584 Atomic Write Unit (PFail): 1 00:29:20.584 Atomic Compare & Write Unit: 1 00:29:20.584 Fused Compare & Write: Not Supported 00:29:20.584 Scatter-Gather List 00:29:20.584 SGL Command Set: Supported 00:29:20.584 SGL Keyed: Not Supported 00:29:20.584 SGL Bit Bucket Descriptor: Not Supported 00:29:20.584 SGL Metadata Pointer: Not Supported 00:29:20.584 Oversized SGL: Not Supported 00:29:20.584 SGL Metadata Address: Not Supported 00:29:20.584 SGL Offset: Not Supported 00:29:20.584 Transport SGL Data Block: Not Supported 00:29:20.584 Replay Protected Memory Block: Not Supported 00:29:20.584 00:29:20.584 Firmware Slot Information 00:29:20.584 ========================= 00:29:20.584 Active slot: 1 00:29:20.584 Slot 1 Firmware Revision: 1.0 00:29:20.584 00:29:20.584 00:29:20.584 Commands Supported and Effects 00:29:20.584 ============================== 00:29:20.584 Admin Commands 00:29:20.584 -------------- 00:29:20.584 Delete I/O Submission Queue (00h): Supported 00:29:20.584 Create I/O Submission Queue (01h): Supported 00:29:20.584 Get Log Page (02h): Supported 00:29:20.584 Delete I/O Completion Queue (04h): Supported 00:29:20.584 Create I/O Completion Queue (05h): Supported 00:29:20.584 Identify (06h): Supported 00:29:20.584 Abort (08h): Supported 00:29:20.584 Set Features (09h): Supported 00:29:20.584 Get Features (0Ah): Supported 00:29:20.584 Asynchronous Event Request (0Ch): Supported 00:29:20.584 Namespace Attachment (15h): Supported NS-Inventory-Change 00:29:20.584 Directive Send (19h): Supported 00:29:20.584 Directive Receive (1Ah): Supported 00:29:20.584 Virtualization Management (1Ch): Supported 00:29:20.584 Doorbell Buffer Config (7Ch): Supported 00:29:20.584 Format NVM (80h): Supported LBA-Change 00:29:20.584 I/O Commands 00:29:20.584 ------------ 00:29:20.584 Flush (00h): Supported LBA-Change 00:29:20.584 Write (01h): Supported LBA-Change 00:29:20.584 Read (02h): Supported 00:29:20.584 Compare (05h): Supported 00:29:20.584 Write Zeroes (08h): Supported LBA-Change 00:29:20.584 Dataset Management (09h): Supported LBA-Change 00:29:20.584 Unknown (0Ch): Supported 00:29:20.584 Unknown (12h): Supported 00:29:20.584 Copy (19h): Supported LBA-Change 00:29:20.584 Unknown (1Dh): Supported LBA-Change 00:29:20.584 00:29:20.584 Error Log 00:29:20.584 ========= 00:29:20.584 00:29:20.584 Arbitration 00:29:20.584 =========== 00:29:20.584 Arbitration Burst: no limit 00:29:20.584 00:29:20.584 Power Management 00:29:20.584 ================ 00:29:20.584 Number of Power States: 1 00:29:20.584 Current Power State: Power State #0 00:29:20.584 Power State #0: 00:29:20.584 Max Power: 25.00 W 00:29:20.584 Non-Operational State: Operational 00:29:20.584 Entry Latency: 16 microseconds 00:29:20.584 Exit Latency: 4 microseconds 00:29:20.584 Relative Read Throughput: 0 
00:29:20.584 Relative Read Latency: 0 00:29:20.584 Relative Write Throughput: 0 00:29:20.584 Relative Write Latency: 0 00:29:20.584 Idle Power: Not Reported 00:29:20.584 Active Power: Not Reported 00:29:20.584 Non-Operational Permissive Mode: Not Supported 00:29:20.584 00:29:20.584 Health Information 00:29:20.584 ================== 00:29:20.584 Critical Warnings: 00:29:20.584 Available Spare Space: OK 00:29:20.584 Temperature: OK 00:29:20.584 Device Reliability: OK 00:29:20.584 Read Only: No 00:29:20.584 Volatile Memory Backup: OK 00:29:20.584 Current Temperature: 323 Kelvin (50 Celsius) 00:29:20.584 Temperature Threshold: 343 Kelvin (70 Celsius) 00:29:20.584 Available Spare: 0% 00:29:20.584 Available Spare Threshold: 0% 00:29:20.584 Life Percentage Used: 0% 00:29:20.584 Data Units Read: 7801 00:29:20.584 Data Units Written: 3798 00:29:20.584 Host Read Commands: 367419 00:29:20.584 Host Write Commands: 198856 00:29:20.585 Controller Busy Time: 0 minutes 00:29:20.585 Power Cycles: 0 00:29:20.585 Power On Hours: 0 hours 00:29:20.585 Unsafe Shutdowns: 0 00:29:20.585 Unrecoverable Media Errors: 0 00:29:20.585 Lifetime Error Log Entries: 0 00:29:20.585 Warning Temperature Time: 0 minutes 00:29:20.585 Critical Temperature Time: 0 minutes 00:29:20.585 00:29:20.585 Number of Queues 00:29:20.585 ================ 00:29:20.585 Number of I/O Submission Queues: 64 00:29:20.585 Number of I/O Completion Queues: 64 00:29:20.585 00:29:20.585 ZNS Specific Controller Data 00:29:20.585 ============================ 00:29:20.585 Zone Append Size Limit: 0 00:29:20.585 00:29:20.585 00:29:20.585 Active Namespaces 00:29:20.585 ================= 00:29:20.585 Namespace ID:1 00:29:20.585 Error Recovery Timeout: Unlimited 00:29:20.585 Command Set Identifier: NVM (00h) 00:29:20.585 Deallocate: Supported 00:29:20.585 Deallocated/Unwritten Error: Supported 00:29:20.585 Deallocated Read Value: All 0x00 00:29:20.585 Deallocate in Write Zeroes: Not Supported 00:29:20.585 Deallocated Guard Field: 0xFFFF 00:29:20.585 Flush: Supported 00:29:20.585 Reservation: Not Supported 00:29:20.585 Namespace Sharing Capabilities: Private 00:29:20.585 Size (in LBAs): 1310720 (5GiB) 00:29:20.585 Capacity (in LBAs): 1310720 (5GiB) 00:29:20.585 Utilization (in LBAs): 1310720 (5GiB) 00:29:20.585 Thin Provisioning: Not Supported 00:29:20.585 Per-NS Atomic Units: No 00:29:20.585 Maximum Single Source Range Length: 128 00:29:20.585 Maximum Copy Length: 128 00:29:20.585 Maximum Source Range Count: 128 00:29:20.585 NGUID/EUI64 Never Reused: No 00:29:20.585 Namespace Write Protected: No 00:29:20.585 Number of LBA Formats: 8 00:29:20.585 Current LBA Format: LBA Format #04 00:29:20.585 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:20.585 LBA Format #01: Data Size: 512 Metadata Size: 8 00:29:20.585 LBA Format #02: Data Size: 512 Metadata Size: 16 00:29:20.585 LBA Format #03: Data Size: 512 Metadata Size: 64 00:29:20.585 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:29:20.585 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:29:20.585 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:29:20.585 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:29:20.585 00:29:20.585 00:29:20.585 real 0m0.662s 00:29:20.585 user 0m0.256s 00:29:20.585 sys 0m0.326s 00:29:20.585 05:26:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:20.585 ************************************ 00:29:20.585 END TEST nvme_identify 00:29:20.585 05:26:39 -- common/autotest_common.sh@10 -- # set +x 00:29:20.585 ************************************ 00:29:20.585 
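The nvme_perf run that follows reports throughput plus a latency histogram. Two reading aids, not part of the tool's output: MiB/s is simply IOPS times the 12288-byte I/O size divided by 2^20, and each entry in the "Summary latency data" percentile list is the upper edge of the first histogram bucket whose cumulative percentage reaches that level (for the read run, 50.00000% : 2159.709us lines up with the 2144.815 - 2159.709 bucket reaching 50.2315%).

# Cross-check of the read-run summary below: 58752 IOPS at 12 KiB per I/O.
echo 'scale=2; 58752.00 * 12288 / 1048576' | bc    # prints 688.50 (MiB/s)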
05:26:39 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:29:20.585 05:26:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:20.585 05:26:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:20.585 05:26:39 -- common/autotest_common.sh@10 -- # set +x 00:29:20.585 ************************************ 00:29:20.585 START TEST nvme_perf 00:29:20.585 ************************************ 00:29:20.585 05:26:39 -- common/autotest_common.sh@1104 -- # nvme_perf 00:29:20.585 05:26:39 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:29:21.963 Initializing NVMe Controllers 00:29:21.963 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:29:21.963 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:29:21.963 Initialization complete. Launching workers. 00:29:21.963 ======================================================== 00:29:21.963 Latency(us) 00:29:21.963 Device Information : IOPS MiB/s Average min max 00:29:21.963 PCIE (0000:00:06.0) NSID 1 from core 0: 58752.00 688.50 2178.37 1035.17 6475.80 00:29:21.963 ======================================================== 00:29:21.963 Total : 58752.00 688.50 2178.37 1035.17 6475.80 00:29:21.963 00:29:21.963 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:29:21.963 ================================================================================= 00:29:21.963 1.00000% : 1310.720us 00:29:21.963 10.00000% : 1496.902us 00:29:21.963 25.00000% : 1750.109us 00:29:21.963 50.00000% : 2159.709us 00:29:21.963 75.00000% : 2576.756us 00:29:21.963 90.00000% : 2829.964us 00:29:21.963 95.00000% : 3068.276us 00:29:21.963 98.00000% : 3336.378us 00:29:21.963 99.00000% : 3455.535us 00:29:21.963 99.50000% : 3544.902us 00:29:21.963 99.90000% : 4706.676us 00:29:21.963 99.99000% : 6315.287us 00:29:21.963 99.99900% : 6494.022us 00:29:21.963 99.99990% : 6494.022us 00:29:21.963 99.99999% : 6494.022us 00:29:21.963 00:29:21.963 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:29:21.964 ============================================================================== 00:29:21.964 Range in us Cumulative IO count 00:29:21.964 1027.724 - 1035.171: 0.0017% ( 1) 00:29:21.964 1035.171 - 1042.618: 0.0051% ( 2) 00:29:21.964 1050.065 - 1057.513: 0.0068% ( 1) 00:29:21.964 1057.513 - 1064.960: 0.0085% ( 1) 00:29:21.964 1072.407 - 1079.855: 0.0136% ( 3) 00:29:21.964 1087.302 - 1094.749: 0.0153% ( 1) 00:29:21.964 1094.749 - 1102.196: 0.0170% ( 1) 00:29:21.964 1102.196 - 1109.644: 0.0221% ( 3) 00:29:21.964 1117.091 - 1124.538: 0.0255% ( 2) 00:29:21.964 1124.538 - 1131.985: 0.0272% ( 1) 00:29:21.964 1131.985 - 1139.433: 0.0289% ( 1) 00:29:21.964 1139.433 - 1146.880: 0.0306% ( 1) 00:29:21.964 1146.880 - 1154.327: 0.0323% ( 1) 00:29:21.964 1154.327 - 1161.775: 0.0357% ( 2) 00:29:21.964 1161.775 - 1169.222: 0.0374% ( 1) 00:29:21.964 1169.222 - 1176.669: 0.0408% ( 2) 00:29:21.964 1176.669 - 1184.116: 0.0477% ( 4) 00:29:21.964 1184.116 - 1191.564: 0.0528% ( 3) 00:29:21.964 1191.564 - 1199.011: 0.0596% ( 4) 00:29:21.964 1199.011 - 1206.458: 0.0698% ( 6) 00:29:21.964 1206.458 - 1213.905: 0.0834% ( 8) 00:29:21.964 1213.905 - 1221.353: 0.0970% ( 8) 00:29:21.964 1221.353 - 1228.800: 0.1260% ( 17) 00:29:21.964 1228.800 - 1236.247: 0.1532% ( 16) 00:29:21.964 1236.247 - 1243.695: 0.1974% ( 26) 00:29:21.964 1243.695 - 1251.142: 0.2434% ( 27) 00:29:21.964 1251.142 - 1258.589: 0.2996% ( 33) 00:29:21.964 1258.589 - 1266.036: 0.3676% ( 40) 00:29:21.964 1266.036 - 1273.484: 0.4510% 
( 49) 00:29:21.964 1273.484 - 1280.931: 0.5719% ( 71) 00:29:21.964 1280.931 - 1288.378: 0.6961% ( 73) 00:29:21.964 1288.378 - 1295.825: 0.8255% ( 76) 00:29:21.964 1295.825 - 1303.273: 0.9770% ( 89) 00:29:21.964 1303.273 - 1310.720: 1.1506% ( 102) 00:29:21.964 1310.720 - 1318.167: 1.3565% ( 121) 00:29:21.964 1318.167 - 1325.615: 1.5778% ( 130) 00:29:21.964 1325.615 - 1333.062: 1.8076% ( 135) 00:29:21.964 1333.062 - 1340.509: 2.0850% ( 163) 00:29:21.964 1340.509 - 1347.956: 2.3591% ( 161) 00:29:21.964 1347.956 - 1355.404: 2.6433% ( 167) 00:29:21.964 1355.404 - 1362.851: 2.9412% ( 175) 00:29:21.964 1362.851 - 1370.298: 3.2680% ( 192) 00:29:21.964 1370.298 - 1377.745: 3.6033% ( 197) 00:29:21.964 1377.745 - 1385.193: 3.9590% ( 209) 00:29:21.964 1385.193 - 1392.640: 4.2943% ( 197) 00:29:21.964 1392.640 - 1400.087: 4.6449% ( 206) 00:29:21.964 1400.087 - 1407.535: 5.0109% ( 215) 00:29:21.964 1407.535 - 1414.982: 5.4041% ( 231) 00:29:21.964 1414.982 - 1422.429: 5.7921% ( 228) 00:29:21.964 1422.429 - 1429.876: 6.2211% ( 252) 00:29:21.964 1429.876 - 1437.324: 6.5989% ( 222) 00:29:21.964 1437.324 - 1444.771: 7.0125% ( 243) 00:29:21.964 1444.771 - 1452.218: 7.4670% ( 267) 00:29:21.964 1452.218 - 1459.665: 7.8789% ( 242) 00:29:21.964 1459.665 - 1467.113: 8.2976% ( 246) 00:29:21.964 1467.113 - 1474.560: 8.7520% ( 267) 00:29:21.964 1474.560 - 1482.007: 9.1946% ( 260) 00:29:21.964 1482.007 - 1489.455: 9.6048% ( 241) 00:29:21.964 1489.455 - 1496.902: 10.0763% ( 277) 00:29:21.964 1496.902 - 1504.349: 10.5103% ( 255) 00:29:21.964 1504.349 - 1511.796: 10.9545% ( 261) 00:29:21.964 1511.796 - 1519.244: 11.4175% ( 272) 00:29:21.964 1519.244 - 1526.691: 11.8396% ( 248) 00:29:21.964 1526.691 - 1534.138: 12.3468% ( 298) 00:29:21.964 1534.138 - 1541.585: 12.7689% ( 248) 00:29:21.964 1541.585 - 1549.033: 13.2438% ( 279) 00:29:21.964 1549.033 - 1556.480: 13.6846% ( 259) 00:29:21.964 1556.480 - 1563.927: 14.1153% ( 253) 00:29:21.964 1563.927 - 1571.375: 14.5765% ( 271) 00:29:21.964 1571.375 - 1578.822: 15.0361% ( 270) 00:29:21.964 1578.822 - 1586.269: 15.4735% ( 257) 00:29:21.964 1586.269 - 1593.716: 15.9416% ( 275) 00:29:21.964 1593.716 - 1601.164: 16.3705% ( 252) 00:29:21.964 1601.164 - 1608.611: 16.8386% ( 275) 00:29:21.964 1608.611 - 1616.058: 17.2828% ( 261) 00:29:21.964 1616.058 - 1623.505: 17.7322% ( 264) 00:29:21.964 1623.505 - 1630.953: 18.1866% ( 267) 00:29:21.964 1630.953 - 1638.400: 18.6394% ( 266) 00:29:21.964 1638.400 - 1645.847: 19.0802% ( 259) 00:29:21.964 1645.847 - 1653.295: 19.5278% ( 263) 00:29:21.964 1653.295 - 1660.742: 19.9891% ( 271) 00:29:21.964 1660.742 - 1668.189: 20.4368% ( 263) 00:29:21.964 1668.189 - 1675.636: 20.8861% ( 264) 00:29:21.964 1675.636 - 1683.084: 21.3508% ( 273) 00:29:21.964 1683.084 - 1690.531: 21.8069% ( 268) 00:29:21.964 1690.531 - 1697.978: 22.2546% ( 263) 00:29:21.964 1697.978 - 1705.425: 22.7209% ( 274) 00:29:21.964 1705.425 - 1712.873: 23.1481% ( 251) 00:29:21.964 1712.873 - 1720.320: 23.6162% ( 275) 00:29:21.964 1720.320 - 1727.767: 24.0673% ( 265) 00:29:21.964 1727.767 - 1735.215: 24.5251% ( 269) 00:29:21.964 1735.215 - 1742.662: 24.9677% ( 260) 00:29:21.964 1742.662 - 1750.109: 25.4306% ( 272) 00:29:21.964 1750.109 - 1757.556: 25.8783% ( 263) 00:29:21.964 1757.556 - 1765.004: 26.3259% ( 263) 00:29:21.964 1765.004 - 1772.451: 26.7770% ( 265) 00:29:21.964 1772.451 - 1779.898: 27.2535% ( 280) 00:29:21.964 1779.898 - 1787.345: 27.6961% ( 260) 00:29:21.964 1787.345 - 1794.793: 28.1437% ( 263) 00:29:21.964 1794.793 - 1802.240: 28.5914% ( 263) 00:29:21.964 1802.240 - 
1809.687: 29.0628% ( 277) 00:29:21.964 1809.687 - 1817.135: 29.5054% ( 260) 00:29:21.964 1817.135 - 1824.582: 29.9564% ( 265) 00:29:21.964 1824.582 - 1832.029: 30.4194% ( 272) 00:29:21.964 1832.029 - 1839.476: 30.8721% ( 266) 00:29:21.964 1839.476 - 1846.924: 31.3198% ( 263) 00:29:21.964 1846.924 - 1854.371: 31.7793% ( 270) 00:29:21.964 1854.371 - 1861.818: 32.2185% ( 258) 00:29:21.964 1861.818 - 1869.265: 32.6627% ( 261) 00:29:21.964 1869.265 - 1876.713: 33.1342% ( 277) 00:29:21.964 1876.713 - 1884.160: 33.5478% ( 243) 00:29:21.964 1884.160 - 1891.607: 34.0210% ( 278) 00:29:21.964 1891.607 - 1899.055: 34.4669% ( 262) 00:29:21.964 1899.055 - 1906.502: 34.9009% ( 255) 00:29:21.964 1906.502 - 1921.396: 35.8132% ( 536) 00:29:21.964 1921.396 - 1936.291: 36.6830% ( 511) 00:29:21.964 1936.291 - 1951.185: 37.5732% ( 523) 00:29:21.964 1951.185 - 1966.080: 38.4566% ( 519) 00:29:21.964 1966.080 - 1980.975: 39.3416% ( 520) 00:29:21.964 1980.975 - 1995.869: 40.2522% ( 535) 00:29:21.964 1995.869 - 2010.764: 41.1407% ( 522) 00:29:21.964 2010.764 - 2025.658: 42.0598% ( 540) 00:29:21.964 2025.658 - 2040.553: 42.9568% ( 527) 00:29:21.964 2040.553 - 2055.447: 43.8708% ( 537) 00:29:21.964 2055.447 - 2070.342: 44.7883% ( 539) 00:29:21.964 2070.342 - 2085.236: 45.7159% ( 545) 00:29:21.964 2085.236 - 2100.131: 46.6163% ( 529) 00:29:21.964 2100.131 - 2115.025: 47.5354% ( 540) 00:29:21.964 2115.025 - 2129.920: 48.4392% ( 531) 00:29:21.964 2129.920 - 2144.815: 49.3243% ( 520) 00:29:21.964 2144.815 - 2159.709: 50.2315% ( 533) 00:29:21.964 2159.709 - 2174.604: 51.1285% ( 527) 00:29:21.964 2174.604 - 2189.498: 52.0357% ( 533) 00:29:21.964 2189.498 - 2204.393: 52.9531% ( 539) 00:29:21.964 2204.393 - 2219.287: 53.8126% ( 505) 00:29:21.964 2219.287 - 2234.182: 54.7198% ( 533) 00:29:21.964 2234.182 - 2249.076: 55.6015% ( 518) 00:29:21.964 2249.076 - 2263.971: 56.4883% ( 521) 00:29:21.964 2263.971 - 2278.865: 57.3938% ( 532) 00:29:21.964 2278.865 - 2293.760: 58.2653% ( 512) 00:29:21.964 2293.760 - 2308.655: 59.1980% ( 548) 00:29:21.964 2308.655 - 2323.549: 60.1137% ( 538) 00:29:21.964 2323.549 - 2338.444: 61.0124% ( 528) 00:29:21.964 2338.444 - 2353.338: 61.9196% ( 533) 00:29:21.964 2353.338 - 2368.233: 62.8234% ( 531) 00:29:21.964 2368.233 - 2383.127: 63.7187% ( 526) 00:29:21.964 2383.127 - 2398.022: 64.6225% ( 531) 00:29:21.964 2398.022 - 2412.916: 65.5042% ( 518) 00:29:21.964 2412.916 - 2427.811: 66.4335% ( 546) 00:29:21.964 2427.811 - 2442.705: 67.3441% ( 535) 00:29:21.964 2442.705 - 2457.600: 68.2785% ( 549) 00:29:21.964 2457.600 - 2472.495: 69.1738% ( 526) 00:29:21.964 2472.495 - 2487.389: 70.0742% ( 529) 00:29:21.964 2487.389 - 2502.284: 70.9967% ( 542) 00:29:21.964 2502.284 - 2517.178: 71.9244% ( 545) 00:29:21.964 2517.178 - 2532.073: 72.8094% ( 520) 00:29:21.964 2532.073 - 2546.967: 73.7064% ( 527) 00:29:21.964 2546.967 - 2561.862: 74.6409% ( 549) 00:29:21.964 2561.862 - 2576.756: 75.5481% ( 533) 00:29:21.964 2576.756 - 2591.651: 76.4331% ( 520) 00:29:21.964 2591.651 - 2606.545: 77.3693% ( 550) 00:29:21.964 2606.545 - 2621.440: 78.2697% ( 529) 00:29:21.964 2621.440 - 2636.335: 79.1769% ( 533) 00:29:21.964 2636.335 - 2651.229: 80.0926% ( 538) 00:29:21.964 2651.229 - 2666.124: 81.0083% ( 538) 00:29:21.964 2666.124 - 2681.018: 81.8968% ( 522) 00:29:21.964 2681.018 - 2695.913: 82.7989% ( 530) 00:29:21.964 2695.913 - 2710.807: 83.7010% ( 530) 00:29:21.965 2710.807 - 2725.702: 84.5827% ( 518) 00:29:21.965 2725.702 - 2740.596: 85.4779% ( 526) 00:29:21.965 2740.596 - 2755.491: 86.3477% ( 511) 00:29:21.965 2755.491 - 
2770.385: 87.1613% ( 478) 00:29:21.965 2770.385 - 2785.280: 87.9579% ( 468) 00:29:21.965 2785.280 - 2800.175: 88.6966% ( 434) 00:29:21.965 2800.175 - 2815.069: 89.4080% ( 418) 00:29:21.965 2815.069 - 2829.964: 90.0412% ( 372) 00:29:21.965 2829.964 - 2844.858: 90.6352% ( 349) 00:29:21.965 2844.858 - 2859.753: 91.1748% ( 317) 00:29:21.965 2859.753 - 2874.647: 91.6445% ( 276) 00:29:21.965 2874.647 - 2889.542: 92.0905% ( 262) 00:29:21.965 2889.542 - 2904.436: 92.4871% ( 233) 00:29:21.965 2904.436 - 2919.331: 92.8275% ( 200) 00:29:21.965 2919.331 - 2934.225: 93.1407% ( 184) 00:29:21.965 2934.225 - 2949.120: 93.4045% ( 155) 00:29:21.965 2949.120 - 2964.015: 93.6530% ( 146) 00:29:21.965 2964.015 - 2978.909: 93.8811% ( 134) 00:29:21.965 2978.909 - 2993.804: 94.1040% ( 131) 00:29:21.965 2993.804 - 3008.698: 94.3100% ( 121) 00:29:21.965 3008.698 - 3023.593: 94.5057% ( 115) 00:29:21.965 3023.593 - 3038.487: 94.7015% ( 115) 00:29:21.965 3038.487 - 3053.382: 94.8904% ( 111) 00:29:21.965 3053.382 - 3068.276: 95.0725% ( 107) 00:29:21.965 3068.276 - 3083.171: 95.2444% ( 101) 00:29:21.965 3083.171 - 3098.065: 95.4112% ( 98) 00:29:21.965 3098.065 - 3112.960: 95.5797% ( 99) 00:29:21.965 3112.960 - 3127.855: 95.7550% ( 103) 00:29:21.965 3127.855 - 3142.749: 95.9201% ( 97) 00:29:21.965 3142.749 - 3157.644: 96.0835% ( 96) 00:29:21.965 3157.644 - 3172.538: 96.2571% ( 102) 00:29:21.965 3172.538 - 3187.433: 96.4257% ( 99) 00:29:21.965 3187.433 - 3202.327: 96.5976% ( 101) 00:29:21.965 3202.327 - 3217.222: 96.7576% ( 94) 00:29:21.965 3217.222 - 3232.116: 96.9261% ( 99) 00:29:21.965 3232.116 - 3247.011: 97.0844% ( 93) 00:29:21.965 3247.011 - 3261.905: 97.2512% ( 98) 00:29:21.965 3261.905 - 3276.800: 97.4129% ( 95) 00:29:21.965 3276.800 - 3291.695: 97.5746% ( 95) 00:29:21.965 3291.695 - 3306.589: 97.7294% ( 91) 00:29:21.965 3306.589 - 3321.484: 97.8843% ( 91) 00:29:21.965 3321.484 - 3336.378: 98.0290% ( 85) 00:29:21.965 3336.378 - 3351.273: 98.1652% ( 80) 00:29:21.965 3351.273 - 3366.167: 98.3030% ( 81) 00:29:21.965 3366.167 - 3381.062: 98.4375% ( 79) 00:29:21.965 3381.062 - 3395.956: 98.5720% ( 79) 00:29:21.965 3395.956 - 3410.851: 98.6945% ( 72) 00:29:21.965 3410.851 - 3425.745: 98.8154% ( 71) 00:29:21.965 3425.745 - 3440.640: 98.9294% ( 67) 00:29:21.965 3440.640 - 3455.535: 99.0485% ( 70) 00:29:21.965 3455.535 - 3470.429: 99.1524% ( 61) 00:29:21.965 3470.429 - 3485.324: 99.2545% ( 60) 00:29:21.965 3485.324 - 3500.218: 99.3464% ( 54) 00:29:21.965 3500.218 - 3515.113: 99.4196% ( 43) 00:29:21.965 3515.113 - 3530.007: 99.4877% ( 40) 00:29:21.965 3530.007 - 3544.902: 99.5507% ( 37) 00:29:21.965 3544.902 - 3559.796: 99.6051% ( 32) 00:29:21.965 3559.796 - 3574.691: 99.6477% ( 25) 00:29:21.965 3574.691 - 3589.585: 99.6783% ( 18) 00:29:21.965 3589.585 - 3604.480: 99.7021% ( 14) 00:29:21.965 3604.480 - 3619.375: 99.7175% ( 9) 00:29:21.965 3619.375 - 3634.269: 99.7328% ( 9) 00:29:21.965 3634.269 - 3649.164: 99.7379% ( 3) 00:29:21.965 3649.164 - 3664.058: 99.7481% ( 6) 00:29:21.965 3664.058 - 3678.953: 99.7515% ( 2) 00:29:21.965 3678.953 - 3693.847: 99.7549% ( 2) 00:29:21.965 3693.847 - 3708.742: 99.7566% ( 1) 00:29:21.965 3708.742 - 3723.636: 99.7617% ( 3) 00:29:21.965 3723.636 - 3738.531: 99.7651% ( 2) 00:29:21.965 3738.531 - 3753.425: 99.7702% ( 3) 00:29:21.965 3753.425 - 3768.320: 99.7719% ( 1) 00:29:21.965 3768.320 - 3783.215: 99.7770% ( 3) 00:29:21.965 3783.215 - 3798.109: 99.7804% ( 2) 00:29:21.965 3798.109 - 3813.004: 99.7855% ( 3) 00:29:21.965 3813.004 - 3842.793: 99.7923% ( 4) 00:29:21.965 3842.793 - 3872.582: 
99.7992% ( 4) 00:29:21.965 3872.582 - 3902.371: 99.8060% ( 4) 00:29:21.965 3902.371 - 3932.160: 99.8094% ( 2) 00:29:21.965 3932.160 - 3961.949: 99.8145% ( 3) 00:29:21.965 3961.949 - 3991.738: 99.8196% ( 3) 00:29:21.965 3991.738 - 4021.527: 99.8247% ( 3) 00:29:21.965 4021.527 - 4051.316: 99.8281% ( 2) 00:29:21.965 4051.316 - 4081.105: 99.8332% ( 3) 00:29:21.965 4081.105 - 4110.895: 99.8383% ( 3) 00:29:21.965 4110.895 - 4140.684: 99.8434% ( 3) 00:29:21.965 4140.684 - 4170.473: 99.8468% ( 2) 00:29:21.965 4170.473 - 4200.262: 99.8519% ( 3) 00:29:21.965 4200.262 - 4230.051: 99.8570% ( 3) 00:29:21.965 4230.051 - 4259.840: 99.8621% ( 3) 00:29:21.965 4259.840 - 4289.629: 99.8655% ( 2) 00:29:21.965 4289.629 - 4319.418: 99.8706% ( 3) 00:29:21.965 4319.418 - 4349.207: 99.8757% ( 3) 00:29:21.965 4349.207 - 4378.996: 99.8792% ( 2) 00:29:21.965 4378.996 - 4408.785: 99.8826% ( 2) 00:29:21.965 4408.785 - 4438.575: 99.8843% ( 1) 00:29:21.965 4438.575 - 4468.364: 99.8860% ( 1) 00:29:21.965 4468.364 - 4498.153: 99.8877% ( 1) 00:29:21.965 4498.153 - 4527.942: 99.8894% ( 1) 00:29:21.965 4527.942 - 4557.731: 99.8911% ( 1) 00:29:21.965 4557.731 - 4587.520: 99.8928% ( 1) 00:29:21.965 4587.520 - 4617.309: 99.8945% ( 1) 00:29:21.965 4617.309 - 4647.098: 99.8979% ( 2) 00:29:21.965 4647.098 - 4676.887: 99.8996% ( 1) 00:29:21.965 4676.887 - 4706.676: 99.9013% ( 1) 00:29:21.965 4706.676 - 4736.465: 99.9030% ( 1) 00:29:21.965 4736.465 - 4766.255: 99.9047% ( 1) 00:29:21.965 4766.255 - 4796.044: 99.9064% ( 1) 00:29:21.965 4825.833 - 4855.622: 99.9098% ( 2) 00:29:21.965 4885.411 - 4915.200: 99.9115% ( 1) 00:29:21.965 4915.200 - 4944.989: 99.9132% ( 1) 00:29:21.965 4944.989 - 4974.778: 99.9149% ( 1) 00:29:21.965 4974.778 - 5004.567: 99.9166% ( 1) 00:29:21.965 5004.567 - 5034.356: 99.9183% ( 1) 00:29:21.965 5034.356 - 5064.145: 99.9217% ( 2) 00:29:21.965 5064.145 - 5093.935: 99.9234% ( 1) 00:29:21.965 5123.724 - 5153.513: 99.9268% ( 2) 00:29:21.965 5153.513 - 5183.302: 99.9285% ( 1) 00:29:21.965 5183.302 - 5213.091: 99.9302% ( 1) 00:29:21.965 5213.091 - 5242.880: 99.9319% ( 1) 00:29:21.965 5242.880 - 5272.669: 99.9336% ( 1) 00:29:21.965 5272.669 - 5302.458: 99.9353% ( 1) 00:29:21.965 5302.458 - 5332.247: 99.9370% ( 1) 00:29:21.965 5332.247 - 5362.036: 99.9387% ( 1) 00:29:21.965 5362.036 - 5391.825: 99.9404% ( 1) 00:29:21.965 5391.825 - 5421.615: 99.9421% ( 1) 00:29:21.965 5421.615 - 5451.404: 99.9438% ( 1) 00:29:21.965 5451.404 - 5481.193: 99.9455% ( 1) 00:29:21.965 5481.193 - 5510.982: 99.9472% ( 1) 00:29:21.965 5510.982 - 5540.771: 99.9489% ( 1) 00:29:21.965 5540.771 - 5570.560: 99.9506% ( 1) 00:29:21.965 5570.560 - 5600.349: 99.9523% ( 1) 00:29:21.965 5600.349 - 5630.138: 99.9540% ( 1) 00:29:21.965 5630.138 - 5659.927: 99.9557% ( 1) 00:29:21.965 5659.927 - 5689.716: 99.9574% ( 1) 00:29:21.965 5689.716 - 5719.505: 99.9592% ( 1) 00:29:21.965 5749.295 - 5779.084: 99.9609% ( 1) 00:29:21.965 5779.084 - 5808.873: 99.9626% ( 1) 00:29:21.965 5808.873 - 5838.662: 99.9660% ( 2) 00:29:21.965 5838.662 - 5868.451: 99.9677% ( 1) 00:29:21.965 5868.451 - 5898.240: 99.9694% ( 1) 00:29:21.965 5898.240 - 5928.029: 99.9711% ( 1) 00:29:21.965 5928.029 - 5957.818: 99.9728% ( 1) 00:29:21.965 5957.818 - 5987.607: 99.9745% ( 1) 00:29:21.965 5987.607 - 6017.396: 99.9762% ( 1) 00:29:21.965 6017.396 - 6047.185: 99.9779% ( 1) 00:29:21.965 6047.185 - 6076.975: 99.9796% ( 1) 00:29:21.965 6106.764 - 6136.553: 99.9830% ( 2) 00:29:21.965 6136.553 - 6166.342: 99.9847% ( 1) 00:29:21.965 6166.342 - 6196.131: 99.9864% ( 1) 00:29:21.965 6196.131 - 6225.920: 
99.9881% ( 1) 00:29:21.965 6225.920 - 6255.709: 99.9898% ( 1) 00:29:21.965 6285.498 - 6315.287: 99.9932% ( 2) 00:29:21.965 6315.287 - 6345.076: 99.9949% ( 1) 00:29:21.965 6345.076 - 6374.865: 99.9966% ( 1) 00:29:21.965 6374.865 - 6404.655: 99.9983% ( 1) 00:29:21.965 6464.233 - 6494.022: 100.0000% ( 1) 00:29:21.965 00:29:21.965 05:26:40 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:29:23.342 Initializing NVMe Controllers 00:29:23.342 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:29:23.342 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:29:23.342 Initialization complete. Launching workers. 00:29:23.342 ======================================================== 00:29:23.342 Latency(us) 00:29:23.342 Device Information : IOPS MiB/s Average min max 00:29:23.342 PCIE (0000:00:06.0) NSID 1 from core 0: 49400.94 578.92 2594.47 1489.68 5641.45 00:29:23.342 ======================================================== 00:29:23.342 Total : 49400.94 578.92 2594.47 1489.68 5641.45 00:29:23.342 00:29:23.342 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:29:23.342 ================================================================================= 00:29:23.342 1.00000% : 1772.451us 00:29:23.342 10.00000% : 1966.080us 00:29:23.342 25.00000% : 2189.498us 00:29:23.342 50.00000% : 2591.651us 00:29:23.342 75.00000% : 2993.804us 00:29:23.342 90.00000% : 3247.011us 00:29:23.342 95.00000% : 3366.167us 00:29:23.342 98.00000% : 3515.113us 00:29:23.342 99.00000% : 3619.375us 00:29:23.342 99.50000% : 3708.742us 00:29:23.342 99.90000% : 4557.731us 00:29:23.342 99.99000% : 5540.771us 00:29:23.342 99.99900% : 5659.927us 00:29:23.342 99.99990% : 5659.927us 00:29:23.342 99.99999% : 5659.927us 00:29:23.342 00:29:23.342 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:29:23.342 ============================================================================== 00:29:23.342 Range in us Cumulative IO count 00:29:23.342 1489.455 - 1496.902: 0.0040% ( 2) 00:29:23.342 1504.349 - 1511.796: 0.0061% ( 1) 00:29:23.342 1511.796 - 1519.244: 0.0081% ( 1) 00:29:23.342 1549.033 - 1556.480: 0.0101% ( 1) 00:29:23.342 1556.480 - 1563.927: 0.0121% ( 1) 00:29:23.342 1578.822 - 1586.269: 0.0162% ( 2) 00:29:23.342 1586.269 - 1593.716: 0.0182% ( 1) 00:29:23.342 1593.716 - 1601.164: 0.0202% ( 1) 00:29:23.342 1608.611 - 1616.058: 0.0223% ( 1) 00:29:23.343 1630.953 - 1638.400: 0.0283% ( 3) 00:29:23.343 1638.400 - 1645.847: 0.0445% ( 8) 00:29:23.343 1645.847 - 1653.295: 0.0567% ( 6) 00:29:23.343 1653.295 - 1660.742: 0.0708% ( 7) 00:29:23.343 1660.742 - 1668.189: 0.0931% ( 11) 00:29:23.343 1668.189 - 1675.636: 0.1093% ( 8) 00:29:23.343 1675.636 - 1683.084: 0.1518% ( 21) 00:29:23.343 1683.084 - 1690.531: 0.1801% ( 14) 00:29:23.343 1690.531 - 1697.978: 0.2307% ( 25) 00:29:23.343 1697.978 - 1705.425: 0.2793% ( 24) 00:29:23.343 1705.425 - 1712.873: 0.3542% ( 37) 00:29:23.343 1712.873 - 1720.320: 0.4109% ( 28) 00:29:23.343 1720.320 - 1727.767: 0.4918% ( 40) 00:29:23.343 1727.767 - 1735.215: 0.5829% ( 45) 00:29:23.343 1735.215 - 1742.662: 0.6699% ( 43) 00:29:23.343 1742.662 - 1750.109: 0.7428% ( 36) 00:29:23.343 1750.109 - 1757.556: 0.8622% ( 59) 00:29:23.343 1757.556 - 1765.004: 0.9958% ( 66) 00:29:23.343 1765.004 - 1772.451: 1.1638% ( 83) 00:29:23.343 1772.451 - 1779.898: 1.3075% ( 71) 00:29:23.343 1779.898 - 1787.345: 1.4573% ( 74) 00:29:23.343 1787.345 - 1794.793: 1.6900% ( 115) 00:29:23.343 1794.793 - 1802.240: 1.8803% ( 94) 00:29:23.343 
1802.240 - 1809.687: 2.1191% ( 118) 00:29:23.343 1809.687 - 1817.135: 2.3640% ( 121) 00:29:23.343 1817.135 - 1824.582: 2.6413% ( 137) 00:29:23.343 1824.582 - 1832.029: 2.9287% ( 142) 00:29:23.343 1832.029 - 1839.476: 3.2019% ( 135) 00:29:23.343 1839.476 - 1846.924: 3.4934% ( 144) 00:29:23.343 1846.924 - 1854.371: 3.8496% ( 176) 00:29:23.343 1854.371 - 1861.818: 4.1835% ( 165) 00:29:23.343 1861.818 - 1869.265: 4.5418% ( 177) 00:29:23.343 1869.265 - 1876.713: 4.9648% ( 209) 00:29:23.343 1876.713 - 1884.160: 5.3412% ( 186) 00:29:23.343 1884.160 - 1891.607: 5.7845% ( 219) 00:29:23.343 1891.607 - 1899.055: 6.1954% ( 203) 00:29:23.343 1899.055 - 1906.502: 6.6346% ( 217) 00:29:23.343 1906.502 - 1921.396: 7.5899% ( 472) 00:29:23.343 1921.396 - 1936.291: 8.5290% ( 464) 00:29:23.343 1936.291 - 1951.185: 9.5106% ( 485) 00:29:23.343 1951.185 - 1966.080: 10.3910% ( 435) 00:29:23.343 1966.080 - 1980.975: 11.3929% ( 495) 00:29:23.343 1980.975 - 1995.869: 12.4049% ( 500) 00:29:23.343 1995.869 - 2010.764: 13.4391% ( 511) 00:29:23.343 2010.764 - 2025.658: 14.3884% ( 469) 00:29:23.343 2025.658 - 2040.553: 15.4105% ( 505) 00:29:23.343 2040.553 - 2055.447: 16.3476% ( 463) 00:29:23.343 2055.447 - 2070.342: 17.2988% ( 470) 00:29:23.343 2070.342 - 2085.236: 18.3270% ( 508) 00:29:23.343 2085.236 - 2100.131: 19.3329% ( 497) 00:29:23.343 2100.131 - 2115.025: 20.3489% ( 502) 00:29:23.343 2115.025 - 2129.920: 21.3467% ( 493) 00:29:23.343 2129.920 - 2144.815: 22.3729% ( 507) 00:29:23.343 2144.815 - 2159.709: 23.3525% ( 484) 00:29:23.343 2159.709 - 2174.604: 24.3260% ( 481) 00:29:23.343 2174.604 - 2189.498: 25.3279% ( 495) 00:29:23.343 2189.498 - 2204.393: 26.2791% ( 470) 00:29:23.343 2204.393 - 2219.287: 27.2628% ( 486) 00:29:23.343 2219.287 - 2234.182: 28.2667% ( 496) 00:29:23.343 2234.182 - 2249.076: 29.2463% ( 484) 00:29:23.343 2249.076 - 2263.971: 30.2198% ( 481) 00:29:23.343 2263.971 - 2278.865: 31.2237% ( 496) 00:29:23.343 2278.865 - 2293.760: 32.1648% ( 465) 00:29:23.343 2293.760 - 2308.655: 33.1424% ( 483) 00:29:23.343 2308.655 - 2323.549: 34.1341% ( 490) 00:29:23.343 2323.549 - 2338.444: 35.1117% ( 483) 00:29:23.343 2338.444 - 2353.338: 36.0650% ( 471) 00:29:23.343 2353.338 - 2368.233: 37.0648% ( 494) 00:29:23.343 2368.233 - 2383.127: 38.0019% ( 463) 00:29:23.343 2383.127 - 2398.022: 38.9228% ( 455) 00:29:23.343 2398.022 - 2412.916: 39.8721% ( 469) 00:29:23.343 2412.916 - 2427.811: 40.7849% ( 451) 00:29:23.343 2427.811 - 2442.705: 41.6896% ( 447) 00:29:23.343 2442.705 - 2457.600: 42.6105% ( 455) 00:29:23.343 2457.600 - 2472.495: 43.5233% ( 451) 00:29:23.343 2472.495 - 2487.389: 44.4078% ( 437) 00:29:23.343 2487.389 - 2502.284: 45.3267% ( 454) 00:29:23.343 2502.284 - 2517.178: 46.1747% ( 419) 00:29:23.343 2517.178 - 2532.073: 47.0410% ( 428) 00:29:23.343 2532.073 - 2546.967: 47.9437% ( 446) 00:29:23.343 2546.967 - 2561.862: 48.8301% ( 438) 00:29:23.343 2561.862 - 2576.756: 49.7247% ( 442) 00:29:23.343 2576.756 - 2591.651: 50.6295% ( 447) 00:29:23.343 2591.651 - 2606.545: 51.5139% ( 437) 00:29:23.343 2606.545 - 2621.440: 52.4146% ( 445) 00:29:23.343 2621.440 - 2636.335: 53.3395% ( 457) 00:29:23.343 2636.335 - 2651.229: 54.2402% ( 445) 00:29:23.343 2651.229 - 2666.124: 55.1429% ( 446) 00:29:23.343 2666.124 - 2681.018: 56.0577% ( 452) 00:29:23.343 2681.018 - 2695.913: 56.9989% ( 465) 00:29:23.343 2695.913 - 2710.807: 57.8955% ( 443) 00:29:23.343 2710.807 - 2725.702: 58.8326% ( 463) 00:29:23.343 2725.702 - 2740.596: 59.8061% ( 481) 00:29:23.343 2740.596 - 2755.491: 60.7250% ( 454) 00:29:23.343 2755.491 - 2770.385: 
61.6945% ( 479) 00:29:23.343 2770.385 - 2785.280: 62.6457% ( 470) 00:29:23.343 2785.280 - 2800.175: 63.5666% ( 455) 00:29:23.343 2800.175 - 2815.069: 64.4896% ( 456) 00:29:23.343 2815.069 - 2829.964: 65.4267% ( 463) 00:29:23.343 2829.964 - 2844.858: 66.3759% ( 469) 00:29:23.343 2844.858 - 2859.753: 67.3373% ( 475) 00:29:23.343 2859.753 - 2874.647: 68.2663% ( 459) 00:29:23.343 2874.647 - 2889.542: 69.2358% ( 479) 00:29:23.343 2889.542 - 2904.436: 70.2275% ( 490) 00:29:23.343 2904.436 - 2919.331: 71.1707% ( 466) 00:29:23.343 2919.331 - 2934.225: 72.1341% ( 476) 00:29:23.343 2934.225 - 2949.120: 73.0853% ( 470) 00:29:23.343 2949.120 - 2964.015: 74.0325% ( 468) 00:29:23.343 2964.015 - 2978.909: 74.9696% ( 463) 00:29:23.343 2978.909 - 2993.804: 75.9209% ( 470) 00:29:23.343 2993.804 - 3008.698: 76.8722% ( 470) 00:29:23.343 3008.698 - 3023.593: 77.8315% ( 474) 00:29:23.343 3023.593 - 3038.487: 78.7484% ( 453) 00:29:23.343 3038.487 - 3053.382: 79.6956% ( 468) 00:29:23.343 3053.382 - 3068.276: 80.6205% ( 457) 00:29:23.343 3068.276 - 3083.171: 81.5455% ( 457) 00:29:23.343 3083.171 - 3098.065: 82.4401% ( 442) 00:29:23.343 3098.065 - 3112.960: 83.3569% ( 453) 00:29:23.343 3112.960 - 3127.855: 84.2596% ( 446) 00:29:23.343 3127.855 - 3142.749: 85.1522% ( 441) 00:29:23.343 3142.749 - 3157.644: 86.0185% ( 428) 00:29:23.343 3157.644 - 3172.538: 86.8382% ( 405) 00:29:23.343 3172.538 - 3187.433: 87.6741% ( 413) 00:29:23.343 3187.433 - 3202.327: 88.4715% ( 394) 00:29:23.343 3202.327 - 3217.222: 89.2366% ( 378) 00:29:23.343 3217.222 - 3232.116: 89.9935% ( 374) 00:29:23.343 3232.116 - 3247.011: 90.6979% ( 348) 00:29:23.343 3247.011 - 3261.905: 91.3476% ( 321) 00:29:23.343 3261.905 - 3276.800: 91.9993% ( 322) 00:29:23.343 3276.800 - 3291.695: 92.5903% ( 292) 00:29:23.343 3291.695 - 3306.589: 93.1853% ( 294) 00:29:23.343 3306.589 - 3321.484: 93.7196% ( 264) 00:29:23.343 3321.484 - 3336.378: 94.2155% ( 245) 00:29:23.343 3336.378 - 3351.273: 94.7094% ( 244) 00:29:23.343 3351.273 - 3366.167: 95.1364% ( 211) 00:29:23.343 3366.167 - 3381.062: 95.5291% ( 194) 00:29:23.343 3381.062 - 3395.956: 95.8914% ( 179) 00:29:23.343 3395.956 - 3410.851: 96.2496% ( 177) 00:29:23.343 3410.851 - 3425.745: 96.5755% ( 161) 00:29:23.343 3425.745 - 3440.640: 96.8871% ( 154) 00:29:23.343 3440.640 - 3455.535: 97.1563% ( 133) 00:29:23.343 3455.535 - 3470.429: 97.4093% ( 125) 00:29:23.343 3470.429 - 3485.324: 97.6623% ( 125) 00:29:23.343 3485.324 - 3500.218: 97.8769% ( 106) 00:29:23.343 3500.218 - 3515.113: 98.0712% ( 96) 00:29:23.343 3515.113 - 3530.007: 98.2412% ( 84) 00:29:23.343 3530.007 - 3544.902: 98.3950% ( 76) 00:29:23.343 3544.902 - 3559.796: 98.5427% ( 73) 00:29:23.343 3559.796 - 3574.691: 98.6642% ( 60) 00:29:23.343 3574.691 - 3589.585: 98.7876% ( 61) 00:29:23.343 3589.585 - 3604.480: 98.9253% ( 68) 00:29:23.343 3604.480 - 3619.375: 99.0184% ( 46) 00:29:23.343 3619.375 - 3634.269: 99.1236% ( 52) 00:29:23.343 3634.269 - 3649.164: 99.2268% ( 51) 00:29:23.343 3649.164 - 3664.058: 99.3058% ( 39) 00:29:23.343 3664.058 - 3678.953: 99.3888% ( 41) 00:29:23.343 3678.953 - 3693.847: 99.4596% ( 35) 00:29:23.343 3693.847 - 3708.742: 99.5122% ( 26) 00:29:23.343 3708.742 - 3723.636: 99.5507% ( 19) 00:29:23.343 3723.636 - 3738.531: 99.5790% ( 14) 00:29:23.343 3738.531 - 3753.425: 99.6175% ( 19) 00:29:23.343 3753.425 - 3768.320: 99.6418% ( 12) 00:29:23.343 3768.320 - 3783.215: 99.6580% ( 8) 00:29:23.343 3783.215 - 3798.109: 99.6782% ( 10) 00:29:23.343 3798.109 - 3813.004: 99.6924% ( 7) 00:29:23.343 3813.004 - 3842.793: 99.7227% ( 15) 00:29:23.343 
3842.793 - 3872.582: 99.7430% ( 10) 00:29:23.343 3872.582 - 3902.371: 99.7571% ( 7) 00:29:23.343 3902.371 - 3932.160: 99.7713% ( 7) 00:29:23.343 3932.160 - 3961.949: 99.7875% ( 8) 00:29:23.343 3961.949 - 3991.738: 99.8037% ( 8) 00:29:23.343 3991.738 - 4021.527: 99.8158% ( 6) 00:29:23.343 4021.527 - 4051.316: 99.8320% ( 8) 00:29:23.343 4051.316 - 4081.105: 99.8421% ( 5) 00:29:23.343 4081.105 - 4110.895: 99.8523% ( 5) 00:29:23.343 4110.895 - 4140.684: 99.8644% ( 6) 00:29:23.343 4140.684 - 4170.473: 99.8725% ( 4) 00:29:23.343 4170.473 - 4200.262: 99.8745% ( 1) 00:29:23.343 4200.262 - 4230.051: 99.8765% ( 1) 00:29:23.343 4259.840 - 4289.629: 99.8806% ( 2) 00:29:23.343 4289.629 - 4319.418: 99.8826% ( 1) 00:29:23.343 4319.418 - 4349.207: 99.8867% ( 2) 00:29:23.343 4349.207 - 4378.996: 99.8887% ( 1) 00:29:23.344 4378.996 - 4408.785: 99.8907% ( 1) 00:29:23.344 4408.785 - 4438.575: 99.8927% ( 1) 00:29:23.344 4438.575 - 4468.364: 99.8948% ( 1) 00:29:23.344 4468.364 - 4498.153: 99.8968% ( 1) 00:29:23.344 4498.153 - 4527.942: 99.8988% ( 1) 00:29:23.344 4527.942 - 4557.731: 99.9008% ( 1) 00:29:23.344 4557.731 - 4587.520: 99.9028% ( 1) 00:29:23.344 4587.520 - 4617.309: 99.9049% ( 1) 00:29:23.344 4617.309 - 4647.098: 99.9089% ( 2) 00:29:23.344 4647.098 - 4676.887: 99.9109% ( 1) 00:29:23.344 4676.887 - 4706.676: 99.9130% ( 1) 00:29:23.344 4706.676 - 4736.465: 99.9150% ( 1) 00:29:23.344 4736.465 - 4766.255: 99.9170% ( 1) 00:29:23.344 4766.255 - 4796.044: 99.9211% ( 2) 00:29:23.344 4796.044 - 4825.833: 99.9231% ( 1) 00:29:23.344 4825.833 - 4855.622: 99.9271% ( 2) 00:29:23.344 4855.622 - 4885.411: 99.9292% ( 1) 00:29:23.344 4885.411 - 4915.200: 99.9332% ( 2) 00:29:23.344 4915.200 - 4944.989: 99.9352% ( 1) 00:29:23.344 4944.989 - 4974.778: 99.9393% ( 2) 00:29:23.344 4974.778 - 5004.567: 99.9413% ( 1) 00:29:23.344 5004.567 - 5034.356: 99.9454% ( 2) 00:29:23.344 5034.356 - 5064.145: 99.9474% ( 1) 00:29:23.344 5064.145 - 5093.935: 99.9514% ( 2) 00:29:23.344 5093.935 - 5123.724: 99.9534% ( 1) 00:29:23.344 5123.724 - 5153.513: 99.9575% ( 2) 00:29:23.344 5153.513 - 5183.302: 99.9595% ( 1) 00:29:23.344 5183.302 - 5213.091: 99.9636% ( 2) 00:29:23.344 5213.091 - 5242.880: 99.9656% ( 1) 00:29:23.344 5242.880 - 5272.669: 99.9696% ( 2) 00:29:23.344 5272.669 - 5302.458: 99.9717% ( 1) 00:29:23.344 5302.458 - 5332.247: 99.9737% ( 1) 00:29:23.344 5332.247 - 5362.036: 99.9777% ( 2) 00:29:23.344 5362.036 - 5391.825: 99.9798% ( 1) 00:29:23.344 5391.825 - 5421.615: 99.9838% ( 2) 00:29:23.344 5421.615 - 5451.404: 99.9858% ( 1) 00:29:23.344 5451.404 - 5481.193: 99.9879% ( 1) 00:29:23.344 5481.193 - 5510.982: 99.9899% ( 1) 00:29:23.344 5510.982 - 5540.771: 99.9919% ( 1) 00:29:23.344 5540.771 - 5570.560: 99.9939% ( 1) 00:29:23.344 5570.560 - 5600.349: 99.9980% ( 2) 00:29:23.344 5630.138 - 5659.927: 100.0000% ( 1) 00:29:23.344 00:29:23.344 05:26:42 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:29:23.344 00:29:23.344 real 0m2.625s 00:29:23.344 user 0m2.257s 00:29:23.344 sys 0m0.283s 00:29:23.344 05:26:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:23.344 05:26:42 -- common/autotest_common.sh@10 -- # set +x 00:29:23.344 ************************************ 00:29:23.344 END TEST nvme_perf 00:29:23.344 ************************************ 00:29:23.344 05:26:42 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:29:23.344 05:26:42 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:29:23.344 05:26:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 
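[Editorial aside] The nvme_perf runs above print a cumulative latency histogram (each bucket reports the running percentage of I/Os completed at or below that latency) followed by summary percentiles. Below is a minimal, self-contained sketch of how such percentiles can be read off a cumulative histogram; the bucket bounds echo the summary percentiles printed above but are otherwise illustrative data, and this is not spdk_nvme_perf's own code.

```c
/* Minimal sketch: derive a latency percentile from a cumulative histogram.
 * Bucket values are illustrative; this does not reproduce spdk_nvme_perf. */
#include <stdio.h>

struct bucket {
    double upper_us;   /* upper bound of the bucket, in microseconds */
    double cumulative; /* cumulative percentage of I/Os at or below upper_us */
};

/* Return the first bucket bound whose cumulative share reaches the target. */
static double percentile(const struct bucket *b, int n, double target)
{
    for (int i = 0; i < n; i++) {
        if (b[i].cumulative >= target)
            return b[i].upper_us;
    }
    return b[n - 1].upper_us;
}

int main(void)
{
    /* Illustrative buckets shaped like the summary printed in the log above. */
    const struct bucket hist[] = {
        { 1772.451,  1.0 }, { 1966.080, 10.0 }, { 2189.498, 25.0 },
        { 2591.651, 50.0 }, { 2993.804, 75.0 }, { 3247.011, 90.0 },
        { 3366.167, 95.0 }, { 3515.113, 98.0 }, { 5659.927, 100.0 },
    };
    int n = sizeof(hist) / sizeof(hist[0]);

    printf("p50 ~ %.3f us\n", percentile(hist, n, 50.0));
    printf("p95 ~ %.3f us\n", percentile(hist, n, 95.0));
    return 0;
}
```

The "Summary latency data" lines (1.00000%, 50.00000%, 95.00000%, ...) correspond to the first bucket whose cumulative count reaches each target, which is all the lookup above does.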
00:29:23.344 05:26:42 -- common/autotest_common.sh@10 -- # set +x 00:29:23.344 ************************************ 00:29:23.344 START TEST nvme_hello_world 00:29:23.344 ************************************ 00:29:23.344 05:26:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:29:23.603 Initializing NVMe Controllers 00:29:23.603 Attached to 0000:00:06.0 00:29:23.603 Namespace ID: 1 size: 5GB 00:29:23.603 Initialization complete. 00:29:23.603 INFO: using host memory buffer for IO 00:29:23.603 Hello world! 00:29:23.603 00:29:23.603 real 0m0.295s 00:29:23.603 user 0m0.111s 00:29:23.603 sys 0m0.145s 00:29:23.603 05:26:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:23.603 05:26:42 -- common/autotest_common.sh@10 -- # set +x 00:29:23.603 ************************************ 00:29:23.603 END TEST nvme_hello_world 00:29:23.603 ************************************ 00:29:23.603 05:26:42 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:29:23.603 05:26:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:23.603 05:26:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:23.603 05:26:42 -- common/autotest_common.sh@10 -- # set +x 00:29:23.603 ************************************ 00:29:23.603 START TEST nvme_sgl 00:29:23.603 ************************************ 00:29:23.603 05:26:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:29:23.862 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:29:23.862 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:29:23.862 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:29:24.121 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:29:24.121 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:29:24.121 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:29:24.121 NVMe Readv/Writev Request test 00:29:24.121 Attached to 0000:00:06.0 00:29:24.121 0000:00:06.0: build_io_request_2 test passed 00:29:24.121 0000:00:06.0: build_io_request_4 test passed 00:29:24.121 0000:00:06.0: build_io_request_5 test passed 00:29:24.121 0000:00:06.0: build_io_request_6 test passed 00:29:24.121 0000:00:06.0: build_io_request_7 test passed 00:29:24.121 0000:00:06.0: build_io_request_10 test passed 00:29:24.121 Cleaning up... 00:29:24.121 00:29:24.121 real 0m0.385s 00:29:24.121 user 0m0.191s 00:29:24.121 sys 0m0.145s 00:29:24.121 05:26:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:24.121 05:26:43 -- common/autotest_common.sh@10 -- # set +x 00:29:24.121 ************************************ 00:29:24.121 END TEST nvme_sgl 00:29:24.121 ************************************ 00:29:24.121 05:26:43 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:29:24.121 05:26:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:24.121 05:26:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:24.121 05:26:43 -- common/autotest_common.sh@10 -- # set +x 00:29:24.121 ************************************ 00:29:24.121 START TEST nvme_e2edp 00:29:24.121 ************************************ 00:29:24.121 05:26:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:29:24.393 NVMe Write/Read with End-to-End data protection test 00:29:24.393 Attached to 0000:00:06.0 00:29:24.393 Cleaning up... 
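[Editorial aside] The nvme_e2edp test above exercises NVMe end-to-end data protection. For orientation, protected namespaces carry an 8-byte protection-information tuple per logical block: a CRC-16 guard, an application tag, and a reference tag. The sketch below shows that standard T10/NVMe layout; it is illustrative only and is not SPDK's internal definition (the fields are big-endian on the wire, which is ignored here).

```c
/* Illustrative sketch of the 8-byte protection-information (PI) tuple that
 * end-to-end data protection appends to each logical block. Field names
 * follow the NVMe/T10 convention; this is not SPDK code. */
#include <stdint.h>

struct nvme_pi_tuple {
    uint16_t guard;    /* CRC-16 over the block's data */
    uint16_t app_tag;  /* opaque application tag */
    uint32_t ref_tag;  /* typically derived from the LBA */
} __attribute__((packed));

int main(void)
{
    _Static_assert(sizeof(struct nvme_pi_tuple) == 8, "PI tuple is 8 bytes");
    return 0;
}
```

Depending on the protection settings, either the host or the controller generates and checks these tags on each read and write, which is the behavior the e2edp test drives.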
00:29:24.393 00:29:24.393 real 0m0.281s 00:29:24.393 user 0m0.099s 00:29:24.393 sys 0m0.142s 00:29:24.393 05:26:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:24.393 05:26:43 -- common/autotest_common.sh@10 -- # set +x 00:29:24.393 ************************************ 00:29:24.393 END TEST nvme_e2edp 00:29:24.393 ************************************ 00:29:24.393 05:26:43 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:29:24.393 05:26:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:24.393 05:26:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:24.393 05:26:43 -- common/autotest_common.sh@10 -- # set +x 00:29:24.393 ************************************ 00:29:24.393 START TEST nvme_reserve 00:29:24.393 ************************************ 00:29:24.393 05:26:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:29:24.657 ===================================================== 00:29:24.657 NVMe Controller at PCI bus 0, device 6, function 0 00:29:24.657 ===================================================== 00:29:24.657 Reservations: Not Supported 00:29:24.657 Reservation test passed 00:29:24.657 00:29:24.657 real 0m0.234s 00:29:24.657 user 0m0.078s 00:29:24.657 sys 0m0.115s 00:29:24.657 05:26:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:24.657 05:26:43 -- common/autotest_common.sh@10 -- # set +x 00:29:24.657 ************************************ 00:29:24.657 END TEST nvme_reserve 00:29:24.657 ************************************ 00:29:24.657 05:26:43 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:29:24.657 05:26:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:24.657 05:26:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:24.657 05:26:43 -- common/autotest_common.sh@10 -- # set +x 00:29:24.657 ************************************ 00:29:24.657 START TEST nvme_err_injection 00:29:24.657 ************************************ 00:29:24.658 05:26:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:29:24.915 NVMe Error Injection test 00:29:24.915 Attached to 0000:00:06.0 00:29:24.915 0000:00:06.0: get features failed as expected 00:29:24.915 0000:00:06.0: get features successfully as expected 00:29:24.915 0000:00:06.0: read failed as expected 00:29:24.915 0000:00:06.0: read successfully as expected 00:29:24.915 Cleaning up... 
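[Editorial aside] The nvme_err_injection output above follows an arm-then-clear pattern: a failure is injected for an admin command ("get features failed as expected"), then the injection is removed and the same command is expected to succeed. SPDK exposes real helpers for this (spdk_nvme_qpair_add/remove_cmd_error_injection); their exact signatures are not reproduced here. The snippet below is a self-contained toy model of the same pattern, not SPDK code.

```c
/* Toy model of command error injection: arm a fault so the next N
 * submissions of a given opcode fail, then clear it. Mimics the pattern the
 * SPDK err_injection test drives against a real controller. */
#include <stdbool.h>
#include <stdio.h>

struct toy_ctrlr {
    int inject_opc; /* opcode to fail, -1 = none armed */
    int err_count;  /* how many times to fail it */
};

static void add_error_injection(struct toy_ctrlr *c, int opc, int count)
{
    c->inject_opc = opc;
    c->err_count = count;
}

static void remove_error_injection(struct toy_ctrlr *c)
{
    c->inject_opc = -1;
    c->err_count = 0;
}

/* Returns true on success, false if the armed fault fires. */
static bool submit(struct toy_ctrlr *c, int opc)
{
    if (opc == c->inject_opc && c->err_count > 0) {
        c->err_count--;
        return false;
    }
    return true;
}

int main(void)
{
    struct toy_ctrlr c = { .inject_opc = -1 };
    enum { OPC_GET_FEATURES = 0x0A }; /* NVMe admin Get Features opcode */

    add_error_injection(&c, OPC_GET_FEATURES, 1);
    printf("get features %s\n", submit(&c, OPC_GET_FEATURES) ?
           "succeeded (unexpected)" : "failed as expected");

    remove_error_injection(&c);
    printf("get features %s\n", submit(&c, OPC_GET_FEATURES) ?
           "succeeded as expected" : "failed (unexpected)");
    return 0;
}
```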
00:29:24.915 00:29:24.915 real 0m0.296s 00:29:24.915 user 0m0.115s 00:29:24.915 sys 0m0.139s 00:29:24.915 05:26:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:24.915 05:26:43 -- common/autotest_common.sh@10 -- # set +x 00:29:24.915 ************************************ 00:29:24.915 END TEST nvme_err_injection 00:29:24.915 ************************************ 00:29:25.173 05:26:44 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:29:25.173 05:26:44 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:29:25.173 05:26:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:25.173 05:26:44 -- common/autotest_common.sh@10 -- # set +x 00:29:25.173 ************************************ 00:29:25.173 START TEST nvme_overhead 00:29:25.173 ************************************ 00:29:25.173 05:26:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:29:26.553 Initializing NVMe Controllers 00:29:26.553 Attached to 0000:00:06.0 00:29:26.553 Initialization complete. Launching workers. 00:29:26.553 submit (in ns) avg, min, max = 16644.2, 13064.5, 69867.3 00:29:26.553 complete (in ns) avg, min, max = 12982.5, 8953.6, 55138.2 00:29:26.553 00:29:26.553 Submit histogram 00:29:26.553 ================ 00:29:26.553 Range in us Cumulative Count 00:29:26.553 13.033 - 13.091: 0.0115% ( 1) 00:29:26.553 13.091 - 13.149: 0.0231% ( 1) 00:29:26.553 13.149 - 13.207: 0.0923% ( 6) 00:29:26.553 13.207 - 13.265: 0.1961% ( 9) 00:29:26.553 13.265 - 13.324: 0.3691% ( 15) 00:29:26.553 13.324 - 13.382: 0.9804% ( 53) 00:29:26.553 13.382 - 13.440: 2.3875% ( 122) 00:29:26.553 13.440 - 13.498: 5.5133% ( 271) 00:29:26.553 13.498 - 13.556: 10.5882% ( 440) 00:29:26.553 13.556 - 13.615: 17.0934% ( 564) 00:29:26.553 13.615 - 13.673: 22.9988% ( 512) 00:29:26.553 13.673 - 13.731: 28.6159% ( 487) 00:29:26.553 13.731 - 13.789: 32.6644% ( 351) 00:29:26.553 13.789 - 13.847: 37.4740% ( 417) 00:29:26.553 13.847 - 13.905: 42.1569% ( 406) 00:29:26.553 13.905 - 13.964: 46.7243% ( 396) 00:29:26.553 13.964 - 14.022: 51.2918% ( 396) 00:29:26.553 14.022 - 14.080: 54.7174% ( 297) 00:29:26.553 14.080 - 14.138: 57.6355% ( 253) 00:29:26.553 14.138 - 14.196: 59.7463% ( 183) 00:29:26.553 14.196 - 14.255: 61.5686% ( 158) 00:29:26.553 14.255 - 14.313: 63.2411% ( 145) 00:29:26.553 14.313 - 14.371: 64.6021% ( 118) 00:29:26.553 14.371 - 14.429: 65.6517% ( 91) 00:29:26.553 14.429 - 14.487: 66.5859% ( 81) 00:29:26.553 14.487 - 14.545: 67.4971% ( 79) 00:29:26.553 14.545 - 14.604: 68.5006% ( 87) 00:29:26.553 14.604 - 14.662: 69.3310% ( 72) 00:29:26.553 14.662 - 14.720: 70.1961% ( 75) 00:29:26.553 14.720 - 14.778: 71.0381% ( 73) 00:29:26.553 14.778 - 14.836: 71.6724% ( 55) 00:29:26.553 14.836 - 14.895: 72.0992% ( 37) 00:29:26.553 14.895 - 15.011: 72.9527% ( 74) 00:29:26.553 15.011 - 15.127: 74.1869% ( 107) 00:29:26.553 15.127 - 15.244: 75.6401% ( 126) 00:29:26.553 15.244 - 15.360: 77.0473% ( 122) 00:29:26.553 15.360 - 15.476: 77.8777% ( 72) 00:29:26.553 15.476 - 15.593: 78.7082% ( 72) 00:29:26.553 15.593 - 15.709: 79.4118% ( 61) 00:29:26.553 15.709 - 15.825: 79.7809% ( 32) 00:29:26.553 15.825 - 15.942: 80.1038% ( 28) 00:29:26.553 15.942 - 16.058: 80.2653% ( 14) 00:29:26.553 16.058 - 16.175: 80.4268% ( 14) 00:29:26.553 16.175 - 16.291: 80.5652% ( 12) 00:29:26.553 16.291 - 16.407: 80.6344% ( 6) 00:29:26.553 16.407 - 16.524: 80.7151% ( 7) 00:29:26.553 16.524 - 16.640: 80.7728% ( 5) 00:29:26.553 16.640 - 
16.756: 80.8074% ( 3) 00:29:26.553 16.756 - 16.873: 80.8535% ( 4) 00:29:26.553 16.873 - 16.989: 80.8881% ( 3) 00:29:26.553 16.989 - 17.105: 80.9343% ( 4) 00:29:26.553 17.105 - 17.222: 80.9573% ( 2) 00:29:26.553 17.222 - 17.338: 80.9919% ( 3) 00:29:26.553 17.338 - 17.455: 81.0035% ( 1) 00:29:26.553 17.455 - 17.571: 81.0150% ( 1) 00:29:26.553 17.571 - 17.687: 81.0611% ( 4) 00:29:26.553 17.687 - 17.804: 81.0727% ( 1) 00:29:26.553 17.804 - 17.920: 81.1073% ( 3) 00:29:26.553 17.920 - 18.036: 81.1534% ( 4) 00:29:26.553 18.153 - 18.269: 81.1765% ( 2) 00:29:26.553 18.269 - 18.385: 81.2226% ( 4) 00:29:26.553 18.385 - 18.502: 81.2457% ( 2) 00:29:26.553 18.502 - 18.618: 81.3379% ( 8) 00:29:26.553 18.618 - 18.735: 81.4533% ( 10) 00:29:26.553 18.735 - 18.851: 81.6032% ( 13) 00:29:26.553 18.851 - 18.967: 81.7070% ( 9) 00:29:26.553 18.967 - 19.084: 81.8800% ( 15) 00:29:26.553 19.084 - 19.200: 82.0300% ( 13) 00:29:26.553 19.200 - 19.316: 82.1569% ( 11) 00:29:26.553 19.316 - 19.433: 82.2261% ( 6) 00:29:26.553 19.433 - 19.549: 82.3068% ( 7) 00:29:26.553 19.549 - 19.665: 82.3529% ( 4) 00:29:26.553 19.665 - 19.782: 82.4798% ( 11) 00:29:26.553 19.782 - 19.898: 82.5606% ( 7) 00:29:26.553 19.898 - 20.015: 82.6182% ( 5) 00:29:26.553 20.015 - 20.131: 82.7566% ( 12) 00:29:26.553 20.131 - 20.247: 82.9296% ( 15) 00:29:26.553 20.247 - 20.364: 83.0565% ( 11) 00:29:26.553 20.364 - 20.480: 83.2065% ( 13) 00:29:26.553 20.480 - 20.596: 83.4487% ( 21) 00:29:26.553 20.596 - 20.713: 83.5755% ( 11) 00:29:26.553 20.713 - 20.829: 83.6794% ( 9) 00:29:26.553 20.829 - 20.945: 83.7370% ( 5) 00:29:26.553 20.945 - 21.062: 83.8293% ( 8) 00:29:26.553 21.062 - 21.178: 83.9562% ( 11) 00:29:26.553 21.178 - 21.295: 84.0254% ( 6) 00:29:26.553 21.295 - 21.411: 84.0600% ( 3) 00:29:26.553 21.411 - 21.527: 84.1176% ( 5) 00:29:26.553 21.527 - 21.644: 84.1638% ( 4) 00:29:26.553 21.644 - 21.760: 84.2330% ( 6) 00:29:26.553 21.760 - 21.876: 84.3137% ( 7) 00:29:26.553 21.876 - 21.993: 84.3714% ( 5) 00:29:26.553 21.993 - 22.109: 84.4521% ( 7) 00:29:26.553 22.109 - 22.225: 84.4637% ( 1) 00:29:26.553 22.225 - 22.342: 84.4867% ( 2) 00:29:26.553 22.342 - 22.458: 84.5098% ( 2) 00:29:26.553 22.458 - 22.575: 84.5559% ( 4) 00:29:26.553 22.575 - 22.691: 84.6136% ( 5) 00:29:26.553 22.691 - 22.807: 84.6828% ( 6) 00:29:26.553 22.807 - 22.924: 84.7636% ( 7) 00:29:26.553 22.924 - 23.040: 84.7982% ( 3) 00:29:26.553 23.389 - 23.505: 84.8097% ( 1) 00:29:26.553 23.505 - 23.622: 84.8558% ( 4) 00:29:26.553 23.622 - 23.738: 84.8674% ( 1) 00:29:26.553 23.738 - 23.855: 84.8904% ( 2) 00:29:26.553 23.971 - 24.087: 84.9020% ( 1) 00:29:26.553 24.087 - 24.204: 84.9135% ( 1) 00:29:26.553 24.204 - 24.320: 84.9366% ( 2) 00:29:26.553 24.320 - 24.436: 84.9596% ( 2) 00:29:26.553 24.436 - 24.553: 84.9712% ( 1) 00:29:26.553 24.669 - 24.785: 85.0058% ( 3) 00:29:26.553 24.785 - 24.902: 85.0173% ( 1) 00:29:26.553 24.902 - 25.018: 85.0404% ( 2) 00:29:26.553 25.018 - 25.135: 85.0519% ( 1) 00:29:26.553 25.135 - 25.251: 85.0750% ( 2) 00:29:26.553 25.251 - 25.367: 85.0865% ( 1) 00:29:26.553 25.367 - 25.484: 85.1557% ( 6) 00:29:26.553 25.600 - 25.716: 85.1788% ( 2) 00:29:26.553 26.065 - 26.182: 85.1903% ( 1) 00:29:26.553 26.298 - 26.415: 85.2134% ( 2) 00:29:26.553 26.415 - 26.531: 85.2249% ( 1) 00:29:26.553 26.647 - 26.764: 85.2364% ( 1) 00:29:26.553 26.996 - 27.113: 85.2480% ( 1) 00:29:26.553 27.113 - 27.229: 85.2595% ( 1) 00:29:26.553 27.695 - 27.811: 85.2710% ( 1) 00:29:26.553 27.811 - 27.927: 85.3287% ( 5) 00:29:26.553 27.927 - 28.044: 85.4787% ( 13) 00:29:26.553 28.044 - 28.160: 85.7785% ( 
26) 00:29:26.553 28.160 - 28.276: 86.3783% ( 52) 00:29:26.553 28.276 - 28.393: 87.3702% ( 86) 00:29:26.553 28.393 - 28.509: 88.8927% ( 132) 00:29:26.553 28.509 - 28.625: 90.6459% ( 152) 00:29:26.553 28.625 - 28.742: 92.7682% ( 184) 00:29:26.553 28.742 - 28.858: 94.4637% ( 147) 00:29:26.553 28.858 - 28.975: 95.3864% ( 80) 00:29:26.553 28.975 - 29.091: 96.0323% ( 56) 00:29:26.553 29.091 - 29.207: 96.5283% ( 43) 00:29:26.553 29.207 - 29.324: 97.0012% ( 41) 00:29:26.553 29.324 - 29.440: 97.3356% ( 29) 00:29:26.553 29.440 - 29.556: 97.6586% ( 28) 00:29:26.553 29.556 - 29.673: 97.9354% ( 24) 00:29:26.553 29.673 - 29.789: 98.1200% ( 16) 00:29:26.553 29.789 - 30.022: 98.3506% ( 20) 00:29:26.553 30.022 - 30.255: 98.4660% ( 10) 00:29:26.553 30.255 - 30.487: 98.5582% ( 8) 00:29:26.553 30.487 - 30.720: 98.6390% ( 7) 00:29:26.553 30.720 - 30.953: 98.7313% ( 8) 00:29:26.553 30.953 - 31.185: 98.7543% ( 2) 00:29:26.553 31.185 - 31.418: 98.8466% ( 8) 00:29:26.553 31.418 - 31.651: 98.8927% ( 4) 00:29:26.553 31.651 - 31.884: 98.9158% ( 2) 00:29:26.553 31.884 - 32.116: 98.9504% ( 3) 00:29:26.553 32.116 - 32.349: 98.9619% ( 1) 00:29:26.553 32.349 - 32.582: 98.9850% ( 2) 00:29:26.553 32.815 - 33.047: 98.9965% ( 1) 00:29:26.553 33.047 - 33.280: 99.0081% ( 1) 00:29:26.553 33.978 - 34.211: 99.0196% ( 1) 00:29:26.553 34.211 - 34.444: 99.0311% ( 1) 00:29:26.553 34.676 - 34.909: 99.0427% ( 1) 00:29:26.553 34.909 - 35.142: 99.0888% ( 4) 00:29:26.553 35.142 - 35.375: 99.1580% ( 6) 00:29:26.553 35.375 - 35.607: 99.2503% ( 8) 00:29:26.553 35.607 - 35.840: 99.2618% ( 1) 00:29:26.553 35.840 - 36.073: 99.2849% ( 2) 00:29:26.553 36.073 - 36.305: 99.3195% ( 3) 00:29:26.553 36.305 - 36.538: 99.3772% ( 5) 00:29:26.553 36.538 - 36.771: 99.4118% ( 3) 00:29:26.553 36.771 - 37.004: 99.4464% ( 3) 00:29:26.553 37.236 - 37.469: 99.4810% ( 3) 00:29:26.553 37.469 - 37.702: 99.4925% ( 1) 00:29:26.553 37.702 - 37.935: 99.5386% ( 4) 00:29:26.553 37.935 - 38.167: 99.5502% ( 1) 00:29:26.553 38.167 - 38.400: 99.5732% ( 2) 00:29:26.553 38.633 - 38.865: 99.5848% ( 1) 00:29:26.553 39.564 - 39.796: 99.6078% ( 2) 00:29:26.553 40.029 - 40.262: 99.6194% ( 1) 00:29:26.553 40.495 - 40.727: 99.6309% ( 1) 00:29:26.553 42.822 - 43.055: 99.6424% ( 1) 00:29:26.553 43.055 - 43.287: 99.6655% ( 2) 00:29:26.553 43.287 - 43.520: 99.6770% ( 1) 00:29:26.553 43.520 - 43.753: 99.7116% ( 3) 00:29:26.553 43.985 - 44.218: 99.7347% ( 2) 00:29:26.553 44.451 - 44.684: 99.7924% ( 5) 00:29:26.553 44.684 - 44.916: 99.8155% ( 2) 00:29:26.554 45.615 - 45.847: 99.8270% ( 1) 00:29:26.554 45.847 - 46.080: 99.8501% ( 2) 00:29:26.554 46.080 - 46.313: 99.8616% ( 1) 00:29:26.554 47.942 - 48.175: 99.8731% ( 1) 00:29:26.554 48.407 - 48.640: 99.8962% ( 2) 00:29:26.554 52.829 - 53.062: 99.9077% ( 1) 00:29:26.554 54.225 - 54.458: 99.9193% ( 1) 00:29:26.554 55.622 - 55.855: 99.9308% ( 1) 00:29:26.554 56.320 - 56.553: 99.9423% ( 1) 00:29:26.554 56.785 - 57.018: 99.9539% ( 1) 00:29:26.554 58.415 - 58.647: 99.9654% ( 1) 00:29:26.554 58.880 - 59.113: 99.9769% ( 1) 00:29:26.554 61.905 - 62.371: 99.9885% ( 1) 00:29:26.554 69.818 - 70.284: 100.0000% ( 1) 00:29:26.554 00:29:26.554 Complete histogram 00:29:26.554 ================== 00:29:26.554 Range in us Cumulative Count 00:29:26.554 8.902 - 8.960: 0.0115% ( 1) 00:29:26.554 8.960 - 9.018: 0.0346% ( 2) 00:29:26.554 9.018 - 9.076: 0.3922% ( 31) 00:29:26.554 9.076 - 9.135: 1.3264% ( 81) 00:29:26.554 9.135 - 9.193: 3.7255% ( 208) 00:29:26.554 9.193 - 9.251: 7.6355% ( 339) 00:29:26.554 9.251 - 9.309: 10.5421% ( 252) 00:29:26.554 9.309 - 9.367: 
13.4948% ( 256) 00:29:26.554 9.367 - 9.425: 16.9896% ( 303) 00:29:26.554 9.425 - 9.484: 21.7416% ( 412) 00:29:26.554 9.484 - 9.542: 26.9204% ( 449) 00:29:26.554 9.542 - 9.600: 31.6955% ( 414) 00:29:26.554 9.600 - 9.658: 36.4706% ( 414) 00:29:26.554 9.658 - 9.716: 41.3495% ( 423) 00:29:26.554 9.716 - 9.775: 47.0588% ( 495) 00:29:26.554 9.775 - 9.833: 52.6528% ( 485) 00:29:26.554 9.833 - 9.891: 56.6551% ( 347) 00:29:26.554 9.891 - 9.949: 58.7543% ( 182) 00:29:26.554 9.949 - 10.007: 60.1384% ( 120) 00:29:26.554 10.007 - 10.065: 61.2341% ( 95) 00:29:26.554 10.065 - 10.124: 62.3183% ( 94) 00:29:26.554 10.124 - 10.182: 62.9988% ( 59) 00:29:26.554 10.182 - 10.240: 63.7140% ( 62) 00:29:26.554 10.240 - 10.298: 64.2099% ( 43) 00:29:26.554 10.298 - 10.356: 64.7751% ( 49) 00:29:26.554 10.356 - 10.415: 65.7555% ( 85) 00:29:26.554 10.415 - 10.473: 66.9896% ( 107) 00:29:26.554 10.473 - 10.531: 68.2814% ( 112) 00:29:26.554 10.531 - 10.589: 69.3426% ( 92) 00:29:26.554 10.589 - 10.647: 70.1615% ( 71) 00:29:26.554 10.647 - 10.705: 71.1995% ( 90) 00:29:26.554 10.705 - 10.764: 72.0992% ( 78) 00:29:26.554 10.764 - 10.822: 72.9988% ( 78) 00:29:26.554 10.822 - 10.880: 73.9446% ( 82) 00:29:26.554 10.880 - 10.938: 74.5905% ( 56) 00:29:26.554 10.938 - 10.996: 75.1096% ( 45) 00:29:26.554 10.996 - 11.055: 75.3864% ( 24) 00:29:26.554 11.055 - 11.113: 75.6978% ( 27) 00:29:26.554 11.113 - 11.171: 76.0438% ( 30) 00:29:26.554 11.171 - 11.229: 76.3206% ( 24) 00:29:26.554 11.229 - 11.287: 76.5167% ( 17) 00:29:26.554 11.287 - 11.345: 76.7359% ( 19) 00:29:26.554 11.345 - 11.404: 76.9781% ( 21) 00:29:26.554 11.404 - 11.462: 77.1511% ( 15) 00:29:26.554 11.462 - 11.520: 77.2318% ( 7) 00:29:26.554 11.520 - 11.578: 77.3241% ( 8) 00:29:26.554 11.578 - 11.636: 77.3933% ( 6) 00:29:26.554 11.636 - 11.695: 77.4048% ( 1) 00:29:26.554 11.695 - 11.753: 77.4394% ( 3) 00:29:26.554 11.753 - 11.811: 77.4740% ( 3) 00:29:26.554 11.811 - 11.869: 77.4971% ( 2) 00:29:26.554 11.869 - 11.927: 77.5433% ( 4) 00:29:26.554 11.927 - 11.985: 77.5894% ( 4) 00:29:26.554 11.985 - 12.044: 77.6240% ( 3) 00:29:26.554 12.044 - 12.102: 77.6586% ( 3) 00:29:26.554 12.102 - 12.160: 77.7393% ( 7) 00:29:26.554 12.160 - 12.218: 77.8085% ( 6) 00:29:26.554 12.218 - 12.276: 77.8547% ( 4) 00:29:26.554 12.276 - 12.335: 77.9008% ( 4) 00:29:26.554 12.335 - 12.393: 77.9354% ( 3) 00:29:26.554 12.393 - 12.451: 77.9700% ( 3) 00:29:26.554 12.451 - 12.509: 78.0046% ( 3) 00:29:26.554 12.509 - 12.567: 78.0392% ( 3) 00:29:26.554 12.567 - 12.625: 78.0738% ( 3) 00:29:26.554 12.625 - 12.684: 78.1084% ( 3) 00:29:26.554 12.684 - 12.742: 78.1315% ( 2) 00:29:26.554 12.742 - 12.800: 78.1661% ( 3) 00:29:26.554 12.800 - 12.858: 78.2122% ( 4) 00:29:26.554 12.858 - 12.916: 78.2353% ( 2) 00:29:26.554 12.975 - 13.033: 78.2468% ( 1) 00:29:26.554 13.033 - 13.091: 78.2699% ( 2) 00:29:26.554 13.091 - 13.149: 78.2814% ( 1) 00:29:26.554 13.149 - 13.207: 78.3045% ( 2) 00:29:26.554 13.207 - 13.265: 78.3160% ( 1) 00:29:26.554 13.265 - 13.324: 78.3391% ( 2) 00:29:26.554 13.324 - 13.382: 78.3737% ( 3) 00:29:26.554 13.382 - 13.440: 78.3852% ( 1) 00:29:26.554 13.440 - 13.498: 78.4198% ( 3) 00:29:26.554 13.498 - 13.556: 78.4314% ( 1) 00:29:26.554 13.556 - 13.615: 78.4544% ( 2) 00:29:26.554 13.615 - 13.673: 78.4660% ( 1) 00:29:26.554 13.673 - 13.731: 78.4775% ( 1) 00:29:26.554 13.731 - 13.789: 78.5006% ( 2) 00:29:26.554 13.789 - 13.847: 78.5236% ( 2) 00:29:26.554 13.905 - 13.964: 78.5352% ( 1) 00:29:26.554 14.138 - 14.196: 78.5467% ( 1) 00:29:26.554 14.196 - 14.255: 78.5698% ( 2) 00:29:26.554 14.313 - 14.371: 
78.6044% ( 3) 00:29:26.554 14.371 - 14.429: 78.6159% ( 1) 00:29:26.554 14.429 - 14.487: 78.6275% ( 1) 00:29:26.554 14.604 - 14.662: 78.6390% ( 1) 00:29:26.554 14.895 - 15.011: 78.6851% ( 4) 00:29:26.554 15.011 - 15.127: 78.6967% ( 1) 00:29:26.554 15.244 - 15.360: 78.7082% ( 1) 00:29:26.554 15.360 - 15.476: 78.7197% ( 1) 00:29:26.554 15.476 - 15.593: 78.7428% ( 2) 00:29:26.554 15.709 - 15.825: 78.7659% ( 2) 00:29:26.554 15.825 - 15.942: 78.8927% ( 11) 00:29:26.554 15.942 - 16.058: 78.9619% ( 6) 00:29:26.554 16.058 - 16.175: 79.1119% ( 13) 00:29:26.554 16.175 - 16.291: 79.2272% ( 10) 00:29:26.554 16.291 - 16.407: 79.3080% ( 7) 00:29:26.554 16.407 - 16.524: 79.4810% ( 15) 00:29:26.554 16.524 - 16.640: 79.5963% ( 10) 00:29:26.554 16.640 - 16.756: 79.6886% ( 8) 00:29:26.554 16.756 - 16.873: 79.7347% ( 4) 00:29:26.554 16.873 - 16.989: 79.7924% ( 5) 00:29:26.554 16.989 - 17.105: 79.8155% ( 2) 00:29:26.554 17.105 - 17.222: 79.9308% ( 10) 00:29:26.554 17.222 - 17.338: 80.0115% ( 7) 00:29:26.554 17.338 - 17.455: 80.0807% ( 6) 00:29:26.554 17.455 - 17.571: 80.1384% ( 5) 00:29:26.554 17.571 - 17.687: 80.1730% ( 3) 00:29:26.554 17.687 - 17.804: 80.2307% ( 5) 00:29:26.554 17.804 - 17.920: 80.3230% ( 8) 00:29:26.554 17.920 - 18.036: 80.3806% ( 5) 00:29:26.554 18.036 - 18.153: 80.4037% ( 2) 00:29:26.554 18.153 - 18.269: 80.4498% ( 4) 00:29:26.554 18.269 - 18.385: 80.4960% ( 4) 00:29:26.554 18.385 - 18.502: 80.5190% ( 2) 00:29:26.554 18.502 - 18.618: 80.5421% ( 2) 00:29:26.554 18.618 - 18.735: 80.5882% ( 4) 00:29:26.554 18.735 - 18.851: 80.6228% ( 3) 00:29:26.554 18.851 - 18.967: 80.6459% ( 2) 00:29:26.554 18.967 - 19.084: 80.6805% ( 3) 00:29:26.554 19.084 - 19.200: 80.6920% ( 1) 00:29:26.554 19.200 - 19.316: 80.7036% ( 1) 00:29:26.554 19.549 - 19.665: 80.7151% ( 1) 00:29:26.554 19.665 - 19.782: 80.7266% ( 1) 00:29:26.554 19.782 - 19.898: 80.7382% ( 1) 00:29:26.554 20.131 - 20.247: 80.7497% ( 1) 00:29:26.554 20.364 - 20.480: 80.7612% ( 1) 00:29:26.554 20.480 - 20.596: 80.7843% ( 2) 00:29:26.554 20.596 - 20.713: 80.8074% ( 2) 00:29:26.554 20.713 - 20.829: 80.8189% ( 1) 00:29:26.554 20.829 - 20.945: 80.8420% ( 2) 00:29:26.554 20.945 - 21.062: 80.8766% ( 3) 00:29:26.554 21.062 - 21.178: 80.9227% ( 4) 00:29:26.554 21.178 - 21.295: 80.9573% ( 3) 00:29:26.554 21.295 - 21.411: 80.9804% ( 2) 00:29:26.554 23.389 - 23.505: 81.0035% ( 2) 00:29:26.554 23.505 - 23.622: 81.0957% ( 8) 00:29:26.554 23.622 - 23.738: 81.3033% ( 18) 00:29:26.554 23.738 - 23.855: 81.7070% ( 35) 00:29:26.554 23.855 - 23.971: 82.5260% ( 71) 00:29:26.554 23.971 - 24.087: 83.8754% ( 117) 00:29:26.554 24.087 - 24.204: 85.8593% ( 172) 00:29:26.554 24.204 - 24.320: 88.1661% ( 200) 00:29:26.554 24.320 - 24.436: 90.6228% ( 213) 00:29:26.554 24.436 - 24.553: 92.5375% ( 166) 00:29:26.554 24.553 - 24.669: 94.0484% ( 131) 00:29:26.554 24.669 - 24.785: 94.7982% ( 65) 00:29:26.554 24.785 - 24.902: 95.2826% ( 42) 00:29:26.554 24.902 - 25.018: 95.6978% ( 36) 00:29:26.554 25.018 - 25.135: 96.1592% ( 40) 00:29:26.554 25.135 - 25.251: 96.5167% ( 31) 00:29:26.554 25.251 - 25.367: 96.8166% ( 26) 00:29:26.554 25.367 - 25.484: 97.2088% ( 34) 00:29:26.554 25.484 - 25.600: 97.5202% ( 27) 00:29:26.554 25.600 - 25.716: 97.6817% ( 14) 00:29:26.554 25.716 - 25.833: 97.7509% ( 6) 00:29:26.554 25.833 - 25.949: 97.7970% ( 4) 00:29:26.554 25.949 - 26.065: 97.8316% ( 3) 00:29:26.554 26.065 - 26.182: 97.8547% ( 2) 00:29:26.554 26.182 - 26.298: 97.9123% ( 5) 00:29:26.554 26.298 - 26.415: 97.9700% ( 5) 00:29:26.554 26.415 - 26.531: 98.0392% ( 6) 00:29:26.554 26.531 - 26.647: 
98.0623% ( 2) 00:29:26.554 26.764 - 26.880: 98.0854% ( 2) 00:29:26.554 26.996 - 27.113: 98.1315% ( 4) 00:29:26.554 27.113 - 27.229: 98.1430% ( 1) 00:29:26.554 27.229 - 27.345: 98.1546% ( 1) 00:29:26.554 27.345 - 27.462: 98.1892% ( 3) 00:29:26.554 27.462 - 27.578: 98.2122% ( 2) 00:29:26.554 27.695 - 27.811: 98.2353% ( 2) 00:29:26.554 27.927 - 28.044: 98.2584% ( 2) 00:29:26.554 28.044 - 28.160: 98.2814% ( 2) 00:29:26.554 28.160 - 28.276: 98.3045% ( 2) 00:29:26.554 28.509 - 28.625: 98.3160% ( 1) 00:29:26.554 28.858 - 28.975: 98.3276% ( 1) 00:29:26.554 28.975 - 29.091: 98.3391% ( 1) 00:29:26.554 29.324 - 29.440: 98.3622% ( 2) 00:29:26.554 29.440 - 29.556: 98.3737% ( 1) 00:29:26.554 29.789 - 30.022: 98.3852% ( 1) 00:29:26.554 30.022 - 30.255: 98.4198% ( 3) 00:29:26.554 30.255 - 30.487: 98.5006% ( 7) 00:29:26.554 30.487 - 30.720: 98.6275% ( 11) 00:29:26.554 30.720 - 30.953: 98.7889% ( 14) 00:29:26.554 30.953 - 31.185: 98.9273% ( 12) 00:29:26.554 31.185 - 31.418: 99.0081% ( 7) 00:29:26.554 31.418 - 31.651: 99.0888% ( 7) 00:29:26.554 31.651 - 31.884: 99.1696% ( 7) 00:29:26.554 31.884 - 32.116: 99.2157% ( 4) 00:29:26.554 32.116 - 32.349: 99.2734% ( 5) 00:29:26.554 32.349 - 32.582: 99.3310% ( 5) 00:29:26.554 32.582 - 32.815: 99.3426% ( 1) 00:29:26.554 32.815 - 33.047: 99.3541% ( 1) 00:29:26.554 33.047 - 33.280: 99.3772% ( 2) 00:29:26.554 33.280 - 33.513: 99.4118% ( 3) 00:29:26.554 33.513 - 33.745: 99.4233% ( 1) 00:29:26.554 33.745 - 33.978: 99.4348% ( 1) 00:29:26.554 34.211 - 34.444: 99.4579% ( 2) 00:29:26.554 34.676 - 34.909: 99.4925% ( 3) 00:29:26.554 34.909 - 35.142: 99.5156% ( 2) 00:29:26.554 35.375 - 35.607: 99.5271% ( 1) 00:29:26.554 35.607 - 35.840: 99.5502% ( 2) 00:29:26.554 37.236 - 37.469: 99.5617% ( 1) 00:29:26.554 38.400 - 38.633: 99.5848% ( 2) 00:29:26.554 38.633 - 38.865: 99.6194% ( 3) 00:29:26.554 38.865 - 39.098: 99.6424% ( 2) 00:29:26.554 39.098 - 39.331: 99.6655% ( 2) 00:29:26.554 39.331 - 39.564: 99.7347% ( 6) 00:29:26.554 40.029 - 40.262: 99.7578% ( 2) 00:29:26.554 40.727 - 40.960: 99.7693% ( 1) 00:29:26.554 41.193 - 41.425: 99.7809% ( 1) 00:29:26.554 41.658 - 41.891: 99.8039% ( 2) 00:29:26.554 41.891 - 42.124: 99.8385% ( 3) 00:29:26.554 42.124 - 42.356: 99.8501% ( 1) 00:29:26.554 45.615 - 45.847: 99.8616% ( 1) 00:29:26.554 46.313 - 46.545: 99.8731% ( 1) 00:29:26.554 46.778 - 47.011: 99.8962% ( 2) 00:29:26.554 49.105 - 49.338: 99.9077% ( 1) 00:29:26.554 49.571 - 49.804: 99.9193% ( 1) 00:29:26.554 50.036 - 50.269: 99.9308% ( 1) 00:29:26.554 51.433 - 51.665: 99.9423% ( 1) 00:29:26.554 51.898 - 52.131: 99.9539% ( 1) 00:29:26.554 52.131 - 52.364: 99.9654% ( 1) 00:29:26.554 52.829 - 53.062: 99.9769% ( 1) 00:29:26.554 54.225 - 54.458: 99.9885% ( 1) 00:29:26.554 54.924 - 55.156: 100.0000% ( 1) 00:29:26.554 00:29:26.554 00:29:26.554 real 0m1.302s 00:29:26.554 user 0m1.121s 00:29:26.554 sys 0m0.141s 00:29:26.554 05:26:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:26.554 05:26:45 -- common/autotest_common.sh@10 -- # set +x 00:29:26.554 ************************************ 00:29:26.554 END TEST nvme_overhead 00:29:26.554 ************************************ 00:29:26.554 05:26:45 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:29:26.554 05:26:45 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:29:26.554 05:26:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:26.554 05:26:45 -- common/autotest_common.sh@10 -- # set +x 00:29:26.554 ************************************ 00:29:26.554 
START TEST nvme_arbitration 00:29:26.554 ************************************ 00:29:26.554 05:26:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:29:29.846 Initializing NVMe Controllers 00:29:29.846 Attached to 0000:00:06.0 00:29:29.846 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:29:29.846 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:29:29.846 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:29:29.846 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:29:29.846 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:29:29.846 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:29:29.846 Initialization complete. Launching workers. 00:29:29.846 Starting thread on core 1 with urgent priority queue 00:29:29.846 Starting thread on core 2 with urgent priority queue 00:29:29.846 Starting thread on core 3 with urgent priority queue 00:29:29.846 Starting thread on core 0 with urgent priority queue 00:29:29.846 QEMU NVMe Ctrl (12340 ) core 0: 1429.33 IO/s 69.96 secs/100000 ios 00:29:29.846 QEMU NVMe Ctrl (12340 ) core 1: 1216.00 IO/s 82.24 secs/100000 ios 00:29:29.846 QEMU NVMe Ctrl (12340 ) core 2: 746.67 IO/s 133.93 secs/100000 ios 00:29:29.846 QEMU NVMe Ctrl (12340 ) core 3: 618.67 IO/s 161.64 secs/100000 ios 00:29:29.846 ======================================================== 00:29:29.846 00:29:29.846 00:29:29.846 real 0m3.467s 00:29:29.846 user 0m9.468s 00:29:29.846 sys 0m0.158s 00:29:29.846 05:26:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:29.846 05:26:48 -- common/autotest_common.sh@10 -- # set +x 00:29:29.846 ************************************ 00:29:29.846 END TEST nvme_arbitration 00:29:29.846 ************************************ 00:29:29.846 05:26:48 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:29:29.846 05:26:48 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:29:29.846 05:26:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:29.846 05:26:48 -- common/autotest_common.sh@10 -- # set +x 00:29:29.846 ************************************ 00:29:29.846 START TEST nvme_single_aen 00:29:29.846 ************************************ 00:29:29.846 05:26:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:29:29.846 [2024-07-26 05:26:48.946083] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:29.846 [2024-07-26 05:26:48.946176] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:30.105 [2024-07-26 05:26:49.109980] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:29:30.105 Asynchronous Event Request test 00:29:30.105 Attached to 0000:00:06.0 00:29:30.105 Reset controller to setup AER completions for this process 00:29:30.105 Registering asynchronous event callbacks... 
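[Editorial aside] The single-AEN test above registers a callback for asynchronous event requests, lowers the temperature threshold so the controller raises an event, and polls the admin queue until that callback fires. Below is a minimal sketch of the registration and polling step only; controller probe/attach and the Set Features call are omitted, and the SPDK names used are best-effort assumptions from the public API rather than lines copied from aer.c.

```c
/* Minimal sketch of hooking up an AER (Asynchronous Event Request) callback,
 * in the spirit of test/nvme/aer/aer.c. Probe/attach and error handling are
 * omitted; treat helper names as assumptions about the public SPDK API. */
#include <stdio.h>
#include "spdk/nvme.h"

static void aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
    if (spdk_nvme_cpl_is_error(cpl)) {
        fprintf(stderr, "AER completed with error\n");
        return;
    }
    /* cdw0 encodes the event type/info, e.g. a temperature threshold event. */
    printf("AER fired: cdw0=0x%08x\n", cpl->cdw0);
}

static void wait_for_aer(struct spdk_nvme_ctrlr *ctrlr)
{
    spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

    /* The test lowers the temperature threshold (Set Features) so the
     * controller reports an event, then polls the admin queue until the
     * callback above runs. */
    for (;;) {
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
    }
}
```

The stack trace later in this log (frame #16, spdk_nvme_ctrlr_process_admin_completions) shows exactly this polling loop in aer.c driving the admin completions when the crash occurs.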
00:29:30.105 Getting orig temperature thresholds of all controllers 00:29:30.105 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:29:30.105 Setting all controllers temperature threshold low to trigger AER 00:29:30.105 Waiting for all controllers temperature threshold to be set lower 00:29:30.105 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:29:30.105 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:29:30.105 Waiting for all controllers to trigger AER and reset threshold 00:29:30.105 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:29:30.105 Cleaning up... 00:29:30.105 00:29:30.105 real 0m0.238s 00:29:30.105 user 0m0.083s 00:29:30.105 sys 0m0.114s 00:29:30.105 05:26:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:30.105 05:26:49 -- common/autotest_common.sh@10 -- # set +x 00:29:30.105 ************************************ 00:29:30.105 END TEST nvme_single_aen 00:29:30.105 ************************************ 00:29:30.105 05:26:49 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:29:30.105 05:26:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:30.105 05:26:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:30.105 05:26:49 -- common/autotest_common.sh@10 -- # set +x 00:29:30.105 ************************************ 00:29:30.105 START TEST nvme_doorbell_aers 00:29:30.105 ************************************ 00:29:30.105 05:26:49 -- common/autotest_common.sh@1104 -- # nvme_doorbell_aers 00:29:30.105 05:26:49 -- nvme/nvme.sh@70 -- # bdfs=() 00:29:30.105 05:26:49 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:29:30.105 05:26:49 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:29:30.105 05:26:49 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:29:30.105 05:26:49 -- common/autotest_common.sh@1498 -- # bdfs=() 00:29:30.105 05:26:49 -- common/autotest_common.sh@1498 -- # local bdfs 00:29:30.105 05:26:49 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:30.105 05:26:49 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:30.105 05:26:49 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:29:30.364 05:26:49 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:29:30.364 05:26:49 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:29:30.364 05:26:49 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:29:30.364 05:26:49 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:29:30.623 [2024-07-26 05:26:49.484084] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 93129) is not found. Dropping the request. 00:29:40.598 Executing: test_write_invalid_db 00:29:40.598 Waiting for AER completion... 00:29:40.598 Failure: test_write_invalid_db 00:29:40.599 00:29:40.599 Executing: test_invalid_db_write_overflow_sq 00:29:40.599 Waiting for AER completion... 00:29:40.599 Failure: test_invalid_db_write_overflow_sq 00:29:40.599 00:29:40.599 Executing: test_invalid_db_write_overflow_cq 00:29:40.599 Waiting for AER completion... 
00:29:40.599 Failure: test_invalid_db_write_overflow_cq 00:29:40.599 00:29:40.599 00:29:40.599 real 0m10.087s 00:29:40.599 user 0m8.664s 00:29:40.599 sys 0m1.370s 00:29:40.599 05:26:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:40.599 05:26:59 -- common/autotest_common.sh@10 -- # set +x 00:29:40.599 ************************************ 00:29:40.599 END TEST nvme_doorbell_aers 00:29:40.599 ************************************ 00:29:40.599 05:26:59 -- nvme/nvme.sh@97 -- # uname 00:29:40.599 05:26:59 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:29:40.599 05:26:59 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:29:40.599 05:26:59 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:29:40.599 05:26:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:40.599 05:26:59 -- common/autotest_common.sh@10 -- # set +x 00:29:40.599 ************************************ 00:29:40.599 START TEST nvme_multi_aen 00:29:40.599 ************************************ 00:29:40.599 05:26:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:29:40.599 [2024-07-26 05:26:59.381315] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:40.599 [2024-07-26 05:26:59.381445] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:40.599 [2024-07-26 05:26:59.592742] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:29:40.599 [2024-07-26 05:26:59.592816] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 93129) is not found. Dropping the request. 00:29:40.599 [2024-07-26 05:26:59.593491] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 93129) is not found. Dropping the request. 00:29:40.599 [2024-07-26 05:26:59.593699] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 93129) is not found. Dropping the request. 00:29:40.599 [2024-07-26 05:26:59.603533] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:29:40.599 Child process pid: 93301 00:29:40.599 [2024-07-26 05:26:59.603748] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:40.883 ================================================================= 00:29:40.883 ==93300==ERROR: AddressSanitizer: heap-use-after-free on address 0x2000f97ee260 at pc 0x555555da63f8 bp 0x7fffffffcb60 sp 0x7fffffffcb50 00:29:40.883 WRITE of size 8 at 0x2000f97ee260 thread T0 00:29:41.151 #0 0x555555da63f7 in malloc_elem_free_list_remove ../lib/eal/common/malloc_elem.c:418 00:29:41.151 #1 0x555555da652a in malloc_elem_alloc ../lib/eal/common/malloc_elem.c:437 00:29:41.151 #2 0x555555da8885 in heap_alloc ../lib/eal/common/malloc_heap.c:246 00:29:41.151 #3 0x555555daa467 in malloc_heap_alloc_on_heap_id ../lib/eal/common/malloc_heap.c:682 00:29:41.151 #4 0x555555daa6a9 in malloc_heap_alloc ../lib/eal/common/malloc_heap.c:757 00:29:41.151 #5 0x555555dae2a2 in malloc_socket ../lib/eal/common/rte_malloc.c:72 00:29:41.151 #6 0x555555daea72 in rte_malloc_socket ../lib/eal/common/rte_malloc.c:87 00:29:41.151 #7 0x555555daebc3 in rte_zmalloc_socket ../lib/eal/common/rte_malloc.c:111 00:29:41.151 #8 0x555555ca28e6 in spdk_zmalloc /home/vagrant/spdk_repo/spdk/lib/env_dpdk/env.c:42 00:29:41.151 #9 0x555555a8a855 in nvme_ctrlr_queue_async_event /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3138 00:29:41.151 #10 0x555555a8bec5 in nvme_ctrlr_async_event_cb /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3196 00:29:41.151 #11 0x555555ad275a in nvme_complete_request /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_internal.h:1430 00:29:41.151 #12 0x555555ae00e2 in nvme_pcie_qpair_complete_tracker /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:706 00:29:41.151 #13 0x555555ae30f8 in nvme_pcie_qpair_process_completions /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:925 00:29:41.151 #14 0x555555b2b88d in nvme_transport_qpair_process_completions /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_transport.c:615 00:29:41.151 #15 0x555555b07741 in spdk_nvme_qpair_process_completions /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c:799 00:29:41.151 #16 0x555555a9e870 in spdk_nvme_ctrlr_process_admin_completions /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4371 00:29:41.151 #17 0x555555a166fc in spdk_aer_temperature_test /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer.c:464 00:29:41.151 #18 0x555555a18c65 in main /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer.c:675 00:29:41.151 #19 0x7ffff662a1c9 in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58 00:29:41.151 #20 0x7ffff662a28a in __libc_start_main_impl ../csu/libc-start.c:360 00:29:41.151 #21 0x555555a11614 in _start (/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer+0x4bd614) (BuildId: 9143c20cf5ad0edc29d8d1eba91eb09cd21caca8) 00:29:41.151 00:29:41.151 Address 0x2000f97ee260 is a wild pointer inside of access range of size 0x000000000008. 
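[Editorial aside] The report above flags an 8-byte WRITE into memory ASan already considers freed, hit while malloc_elem_free_list_remove was unlinking a heap element: allocator bookkeeping walked a free-list node whose backing memory had been released (the shadow bytes shown next are all fd, "freed heap region"). The snippet below is a self-contained illustration of that class of bug, not the DPDK allocator; built with -fsanitize=address it produces a report of the same shape.

```c
/* Self-contained heap-use-after-free of the kind reported above: a list node
 * is freed but a stale pointer to it is later written through. Build with
 * -fsanitize=address to see a "WRITE of size 8 ... freed heap region" report.
 * This is not DPDK's malloc heap code. */
#include <stdlib.h>

struct node {
    struct node *prev;
    struct node *next;
};

int main(void)
{
    struct node *a = malloc(sizeof(*a));
    struct node *b = malloc(sizeof(*b));

    /* Link b after a. */
    a->prev = NULL;
    a->next = b;
    b->prev = a;
    b->next = NULL;

    free(b);         /* b's memory is returned to the heap... */

    /* ...but the list still references it; "unlinking" it now stores a
     * pointer-sized value into freed memory, which ASan reports as a
     * heap-use-after-free WRITE of size 8, as in the log above. */
    b->prev = NULL;
    a->next = NULL;

    free(a);
    return 0;
}
```

Whether the root cause here sits in the allocator's own metadata handling or in an earlier misuse of the allocation is not something the report alone decides; the backtrace only shows where the stale access was detected.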
00:29:41.151 SUMMARY: AddressSanitizer: heap-use-after-free ../lib/eal/common/malloc_elem.c:418 in malloc_elem_free_list_remove 00:29:41.151 Shadow bytes around the buggy address: 00:29:41.151 0x2000f97edf80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 00:29:41.151 0x2000f97ee000: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 00:29:41.151 0x2000f97ee080: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 00:29:41.151 0x2000f97ee100: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 00:29:41.151 0x2000f97ee180: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 00:29:41.151 =>0x2000f97ee200: fd fd fd fd fd fd fd fd fd fd fd fd[fd]fd fd fd 00:29:41.151 0x2000f97ee280: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 00:29:41.151 0x2000f97ee300: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 00:29:41.151 0x2000f97ee380: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 00:29:41.151 0x2000f97ee400: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 00:29:41.151 0x2000f97ee480: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 00:29:41.151 Shadow byte legend (one shadow byte represents 8 application bytes): 00:29:41.151 Addressable: 00 00:29:41.151 Partially addressable: 01 02 03 04 05 06 07 00:29:41.151 Heap left redzone: fa 00:29:41.151 Freed heap region: fd 00:29:41.151 Stack left redzone: f1 00:29:41.151 Stack mid redzone: f2 00:29:41.151 Stack right redzone: f3 00:29:41.151 Stack after return: f5 00:29:41.151 Stack use after scope: f8 00:29:41.151 Global redzone: f9 00:29:41.151 Global init order: f6 00:29:41.151 Poisoned by user: f7 00:29:41.151 Container overflow: fc 00:29:41.151 Array cookie: ac 00:29:41.151 Intra object redzone: bb 00:29:41.151 ASan internal: fe 00:29:41.151 Left alloca redzone: ca 00:29:41.151 Right alloca redzone: cb 00:29:41.151 ==93300==ABORTING 00:29:47.762 ================================================================= 00:29:47.762 ==92756==ERROR: AddressSanitizer: heap-use-after-free on address 0x2000f97f2e84 at pc 0x555555bd1a1c bp 0x7fffffffca20 sp 0x7fffffffca10 00:29:47.762 READ of size 4 at 0x2000f97f2e84 thread T0 (reactor_1) 00:29:47.762 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1076: 93300 Aborted (core dumped) "$@" 00:29:47.762 05:27:06 -- common/autotest_common.sh@1104 -- # trap - ERR 00:29:47.762 05:27:06 -- common/autotest_common.sh@1104 -- # print_backtrace 00:29:47.762 05:27:06 -- common/autotest_common.sh@1132 -- # [[ ehxBET =~ e ]] 00:29:47.762 05:27:06 -- common/autotest_common.sh@1134 -- # args=('log' '-L' '0' '-i' '-T' '-m' '/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer' 'nvme_multi_aen') 00:29:47.762 05:27:06 -- common/autotest_common.sh@1134 -- # local args 00:29:47.762 05:27:06 -- common/autotest_common.sh@1136 -- # xtrace_disable 00:29:47.762 05:27:06 -- common/autotest_common.sh@10 -- # set +x 00:29:47.762 ========== Backtrace start: ========== 00:29:47.762 00:29:47.762 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1104 -> run_test(["nvme_multi_aen"],["/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer"],["-m"],["-T"],["-i"],["0"],["-L"],["log"]) 00:29:47.762 ... 
00:29:47.762 1099 timing_enter $test_name 00:29:47.762 1100 echo "************************************" 00:29:47.762 1101 echo "START TEST $test_name" 00:29:47.762 1102 echo "************************************" 00:29:47.762 1103 xtrace_restore 00:29:47.762 1104 time "$@" 00:29:47.762 1105 xtrace_disable 00:29:47.762 1106 echo "************************************" 00:29:47.762 1107 echo "END TEST $test_name" 00:29:47.762 1108 echo "************************************" 00:29:47.762 1109 timing_exit $test_name 00:29:47.762 ... 00:29:47.762 in /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh:98 -> main([]) 00:29:47.762 ... 00:29:47.762 93 run_test "nvme_arbitration" $SPDK_EXAMPLE_DIR/arbitration -t 3 -i 0 00:29:47.762 94 run_test "nvme_single_aen" $testdir/aer/aer -T -i 0 -L log 00:29:47.763 95 run_test "nvme_doorbell_aers" nvme_doorbell_aers 00:29:47.763 96 00:29:47.763 97 if [ $(uname) != "FreeBSD" ]; then 00:29:47.763 => 98 run_test "nvme_multi_aen" $testdir/aer/aer -m -T -i 0 -L log 00:29:47.763 99 run_test "nvme_startup" $testdir/startup/startup -t 1000000 00:29:47.763 100 run_test "nvme_multi_secondary" nvme_multi_secondary 00:29:47.763 101 trap - SIGINT SIGTERM EXIT 00:29:47.763 102 kill_stub 00:29:47.763 103 fi 00:29:47.763 ... 00:29:47.763 00:29:47.763 ========== Backtrace end ========== 00:29:47.763 05:27:06 -- common/autotest_common.sh@1173 -- # return 0 00:29:47.763 00:29:47.763 real 0m7.284s 00:29:47.763 user 0m0.254s 00:29:47.763 sys 0m0.314s 00:29:47.763 05:27:06 -- common/autotest_common.sh@1 -- # kill_stub -9 00:29:47.763 05:27:06 -- common/autotest_common.sh@1065 -- # [[ -e /proc/92756 ]] 00:29:47.763 05:27:06 -- common/autotest_common.sh@1066 -- # kill -9 92756 00:29:47.763 05:27:06 -- common/autotest_common.sh@1067 -- # wait 92756 00:29:47.763 05:27:06 -- common/autotest_common.sh@1068 -- # : 00:29:47.763 05:27:06 -- common/autotest_common.sh@1069 -- # rm -f /var/run/spdk_stub0 00:29:47.763 05:27:06 -- common/autotest_common.sh@1073 -- # echo 2 00:29:47.763 05:27:06 -- common/autotest_common.sh@1 -- # exit 1 00:29:47.763 05:27:06 -- common/autotest_common.sh@1104 -- # trap - ERR 00:29:47.763 05:27:06 -- common/autotest_common.sh@1104 -- # print_backtrace 00:29:47.763 05:27:06 -- common/autotest_common.sh@1132 -- # [[ ehxBET =~ e ]] 00:29:47.763 05:27:06 -- common/autotest_common.sh@1134 -- # args=('/home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh' 'nvme' '/home/vagrant/spdk_repo/autorun-spdk.conf') 00:29:47.763 05:27:06 -- common/autotest_common.sh@1134 -- # local args 00:29:47.763 05:27:06 -- common/autotest_common.sh@1136 -- # xtrace_disable 00:29:47.763 05:27:06 -- common/autotest_common.sh@10 -- # set +x 00:29:47.763 ========== Backtrace start: ========== 00:29:47.763 00:29:47.763 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1104 -> run_test(["nvme"],["/home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh"]) 00:29:47.763 ... 00:29:47.763 1099 timing_enter $test_name 00:29:47.763 1100 echo "************************************" 00:29:47.763 1101 echo "START TEST $test_name" 00:29:47.763 1102 echo "************************************" 00:29:47.763 1103 xtrace_restore 00:29:47.763 1104 time "$@" 00:29:47.763 1105 xtrace_disable 00:29:47.763 1106 echo "************************************" 00:29:47.763 1107 echo "END TEST $test_name" 00:29:47.763 1108 echo "************************************" 00:29:47.763 1109 timing_exit $test_name 00:29:47.763 ... 
00:29:47.763 in /home/vagrant/spdk_repo/spdk/autotest.sh:222 -> main(["/home/vagrant/spdk_repo/autorun-spdk.conf"]) 00:29:47.763 ... 00:29:47.763 217 if [ $SPDK_TEST_NVME -eq 1 ]; then 00:29:47.763 218 run_test "blockdev_nvme" $rootdir/test/bdev/blockdev.sh "nvme" 00:29:47.763 219 if [[ $(uname -s) == Linux ]]; then 00:29:47.763 220 run_test "blockdev_nvme_gpt" $rootdir/test/bdev/blockdev.sh "gpt" 00:29:47.763 221 fi 00:29:47.763 => 222 run_test "nvme" $rootdir/test/nvme/nvme.sh 00:29:47.763 223 if [[ $SPDK_TEST_NVME_PMR -eq 1 ]]; then 00:29:47.763 224 run_test "nvme_pmr" $rootdir/test/nvme/nvme_pmr.sh 00:29:47.763 225 fi 00:29:47.763 226 00:29:47.763 227 run_test "nvme_scc" $rootdir/test/nvme/nvme_scc.sh 00:29:47.763 ... 00:29:47.763 00:29:47.763 ========== Backtrace end ========== 00:29:47.763 05:27:06 -- common/autotest_common.sh@1173 -- # return 0 00:29:47.763 00:29:47.763 real 0m31.224s 00:29:47.763 user 1m15.422s 00:29:47.763 sys 0m8.303s 00:29:47.763 05:27:06 -- common/autotest_common.sh@1 -- # autotest_cleanup 00:29:47.763 05:27:06 -- common/autotest_common.sh@1371 -- # local autotest_es=1 00:29:47.763 05:27:06 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:29:47.763 05:27:06 -- common/autotest_common.sh@10 -- # set +x 00:29:57.741 ##### CORE BT aer_93300.core.bt.txt ##### 00:29:57.741 00:29:57.741 gdb: warning: Couldn't determine a path for the index cache directory. 00:29:57.741 00:29:57.741 warning: Can't open file /dev/shm/sem.HnFwIv (deleted) during file-backed mapping note processing 00:29:57.741 00:29:57.741 warning: Can't open file /dev/shm/sem.pcqkvV (deleted) during file-backed mapping note processing 00:29:57.741 [New LWP 93300] 00:29:57.741 [New LWP 93303] 00:29:57.741 [New LWP 93302] 00:29:57.741 00:29:57.741 warning: could not find '.gnu_debugaltlink' file for /lib/x86_64-linux-gnu/librdmacm.so.1 00:29:57.741 [Thread debugging using libthread_db enabled] 00:29:57.741 Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". 00:29:57.741 Core was generated by `/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log'. 00:29:57.741 Program terminated with signal SIGABRT, Aborted. 
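(Annotation, not part of the captured log.) The SIGABRT above is AddressSanitizer terminating the aer binary after the heap-use-after-free it reported at the start of this section; the core backtraces that follow were taken from that aborted process. For reference only, a minimal standalone C program of the same bug class, with nothing SPDK- or DPDK-specific in it, produces the same style of report when built with ASan:

/* uaf.c -- illustrative only, not SPDK/DPDK code.
 * Build: gcc -g -fsanitize=address uaf.c -o uaf */
#include <stdlib.h>

int main(void)
{
    int *p = malloc(4 * sizeof(*p));
    free(p);      /* ASan poisons the region (shadow byte fd, "Freed heap region") */
    return p[0];  /* heap-use-after-free: READ of size 4, process is aborted */
}

ASan keeps freed heap regions poisoned (the runs of fd bytes in the shadow dump above), so any later load or store through a stale pointer is reported at the faulting instruction and the process aborts, which is why the core file records SIGABRT rather than SIGSEGV.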
00:29:57.741 #0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=) at ./nptl/pthread_kill.c:44 00:29:57.741 00:29:57.741 warning: 44 ./nptl/pthread_kill.c: No such file or directory 00:29:57.741 [Current thread is 1 (Thread 0x7ffff7727a80 (LWP 93300))] 00:29:57.741 00:29:57.741 Thread 3 (Thread 0x7ffff30006c0 (LWP 93302)): 00:29:57.741 #0 0x00007ffff672a042 in epoll_wait (epfd=5, events=0x7ffff2ffe8e0, maxevents=1, timeout=-1) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30 00:29:57.741 sc_ret = -4 00:29:57.741 sc_cancel_oldtype = 0 00:29:57.741 sc_ret = 00:29:57.741 #1 0x0000555555dedc36 in eal_intr_handle_interrupts (pfd=5, totalfds=1) at ../lib/eal/linux/eal_interrupts.c:1077 00:29:57.741 events = {{events = 4076860480, data = {ptr = 0x55dd779600007fff, fd = 32767, u32 = 32767, u64 = 6187232949205762047}}} 00:29:57.741 nfds = 0 00:29:57.741 #2 0x0000555555dee13e in eal_intr_thread_main (arg=0x0) at ../lib/eal/linux/eal_interrupts.c:1163 00:29:57.741 pipe_event = {events = 3, data = {ptr = 0x3, fd = 3, u32 = 3, u64 = 3}} 00:29:57.741 src = 0x0 00:29:57.741 numfds = 1 00:29:57.741 pfd = 5 00:29:57.741 __func__ = "eal_intr_thread_main" 00:29:57.741 #3 0x0000555555da0269 in control_thread_start (arg=0x50300002b3f0) at ../lib/eal/common/eal_common_thread.c:282 00:29:57.741 params = 0x50300002b3f0 00:29:57.741 start_arg = 0x0 00:29:57.741 start_routine = 0x555555dedd14 00:29:57.741 #4 0x0000555555dd799d in thread_start_wrapper (arg=0x7ffff43094a0) at ../lib/eal/unix/rte_thread.c:112 00:29:57.741 ctx = 0x7ffff43094a0 00:29:57.741 thread_func = 0x555555da01cc 00:29:57.741 thread_args = 0x50300002b3f0 00:29:57.741 ret = 0 00:29:57.741 #5 0x00007ffff785e10a in asan_thread_start (arg=0x7ffff6812000) at ../../../../src/libsanitizer/asan/asan_interceptors.cpp:234 00:29:57.741 t = 0x7ffff6812000 00:29:57.741 self = 140737270253248 00:29:57.741 args = {routine = 0x555555dd7796 , arg_retval = 0x7ffff43094a0} 00:29:57.741 sigset = {val = {0, 140737327463836, 140737287196453, 140737287196457, 140737287196464, 0, 140733193388034, 11, 50, 140737346865572, 91396904550400, 91396904552432, 91396904550400, 80923, 140737488340592, 11}} 00:29:57.741 retval = 00:29:57.741 #6 0x00007ffff669ca94 in start_thread (arg=) at ./nptl/pthread_create.c:447 00:29:57.741 ret = 00:29:57.741 pd = 00:29:57.741 out = 00:29:57.741 unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140737270253248, 8197147354853834121, 140737270253248, -5016, 11, 140737488339568, 8197147355076132233, 8197139812162108809}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}} 00:29:57.741 not_first_call = 00:29:57.741 #7 0x00007ffff6729c3c in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:78 00:29:57.741 No locals. 
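(Annotation.) Thread 3 is DPDK's EAL interrupt-service thread; it is idle here, parked in epoll_wait() with timeout -1 inside eal_intr_handle_interrupts(), and is not involved in the fault. A rough sketch of that kind of service loop, under assumed names (this is not the actual eal_interrupts.c source):

/* Assumed shape of an interrupt-service loop like Thread 3's; illustrative only. */
#include <sys/epoll.h>

static void intr_thread_loop(int epfd)
{
    struct epoll_event ev;

    for (;;) {
        /* Blocks indefinitely, as in frame #0 above (timeout = -1). */
        int n = epoll_wait(epfd, &ev, 1, -1);
        if (n == 1) {
            /* dispatch_interrupt(ev.data.fd);  -- hypothetical callback dispatch */
        }
    }
}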
00:29:57.741 00:29:57.741 Thread 2 (Thread 0x7ffff1a006c0 (LWP 93303)): 00:29:57.741 #0 0x00007ffff672be3b in __recvmsg_syscall (flags=0, msg=0x7ffff0809050, fd=8) at ../sysdeps/unix/sysv/linux/recvmsg.c:27 00:29:57.741 sc_ret = -512 00:29:57.741 sc_cancel_oldtype = 0 00:29:57.741 sc_ret = 00:29:57.741 #1 __libc_recvmsg (fd=8, msg=0x7ffff0809050, flags=0) at ../sysdeps/unix/sysv/linux/recvmsg.c:41 00:29:57.741 r = 00:29:57.741 #2 0x00007ffff78d6ac0 in ___interceptor_recvmsg (fd=8, msg=0x7ffff0809050, flags=0) at ../../../../src/libsanitizer/sanitizer_common/sanitizer_common_interceptors.inc:3129 00:29:57.741 ctx = 0x7ffff19fe5c0 00:29:57.741 _ctx = {interceptor_name = 0x7ffff7959900 "recvmsg"} 00:29:57.741 res = 00:29:57.741 #3 0x0000555555dc02cc in read_msg (fd=8, m=0x7ffff0a090b0, s=0x7ffff0a09020) at ../lib/eal/common/eal_common_proc.c:284 00:29:57.741 msglen = 0 00:29:57.741 iov = {iov_base = 0x7ffff0a090b0, iov_len = 332} 00:29:57.741 msgh = {msg_name = 0x7ffff0a09020, msg_namelen = 110, msg_iov = 0x7ffff0809030, msg_iovlen = 1, msg_control = 0x7ffff08090b0, msg_controllen = 48, msg_flags = 0} 00:29:57.741 control = '\000' 00:29:57.741 cmsg = 0x0 00:29:57.741 buflen = 332 00:29:57.741 #4 0x0000555555dc0f7b in mp_handle (arg=0x0) at ../lib/eal/common/eal_common_proc.c:410 00:29:57.741 ret = 0 00:29:57.741 msg = {type = 0, msg = {name = '\000' , len_param = 0, num_fds = 0, param = '\000' , fds = {0, 0, 0, 0, 0, 0, 0, 0}}} 00:29:57.741 sa = {sun_family = 0, sun_path = '\000' } 00:29:57.741 fd = 8 00:29:57.742 #5 0x0000555555da0269 in control_thread_start (arg=0x50300002b420) at ../lib/eal/common/eal_common_thread.c:282 00:29:57.742 params = 0x50300002b420 00:29:57.742 start_arg = 0x0 00:29:57.742 start_routine = 0x555555dc0e9b 00:29:57.742 #6 0x0000555555dd799d in thread_start_wrapper (arg=0x7ffff43096a0) at ../lib/eal/unix/rte_thread.c:112 00:29:57.742 ctx = 0x7ffff43096a0 00:29:57.742 thread_func = 0x555555da01cc 00:29:57.742 thread_args = 0x50300002b420 00:29:57.742 ret = 0 00:29:57.742 #7 0x00007ffff785e10a in asan_thread_start (arg=0x7ffff64a2000) at ../../../../src/libsanitizer/asan/asan_interceptors.cpp:234 00:29:57.742 t = 0x7ffff64a2000 00:29:57.742 self = 140737247184576 00:29:57.742 args = {routine = 0x555555dd7796 , arg_retval = 0x7ffff43096a0} 00:29:57.742 sigset = {val = {0, 140737327463836, 140737287196517, 140737287196523, 140737287196528, 0, 140733193388034, 17, 140737337675984, 2015621223825115136, 89404039286656, 2015621223825115136, 140737488338288, 140737289177650, 140737488338304, 2015621223825115136}} 00:29:57.742 retval = 00:29:57.742 #8 0x00007ffff669ca94 in start_thread (arg=) at ./nptl/pthread_create.c:447 00:29:57.742 ret = 00:29:57.742 pd = 00:29:57.742 out = 00:29:57.742 unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140737247184576, 8197141582417788297, 140737247184576, -5016, 2, 140737488335264, 8197141582640086409, 8197139812162108809}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}} 00:29:57.742 not_first_call = 00:29:57.742 #9 0x00007ffff6729c3c in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:78 00:29:57.742 No locals. 
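(Annotation.) Thread 2 is DPDK's multi-process IPC listener (mp_handle/read_msg blocked in recvmsg()); like Thread 3 it is idle. The fault is in Thread 1, whose backtrace follows: frame #11, malloc_elem_free_list_remove() at ../lib/eal/common/malloc_elem.c:418, performs the 8-byte write that ASan flags, and frames #12 through #20 show how it was reached, with nvme_ctrlr_queue_async_event() calling spdk_zmalloc(24, ...), which goes through rte_zmalloc_socket()/rte_malloc_socket() into the DPDK heap, where the allocator touches a heap element that ASan has already poisoned as freed. Generically, removing an element from a doubly linked free list looks like the sketch below (not the DPDK source); if elem, or a neighbour it points at, sits in freed, poisoned memory, the pointer stores are exactly the reported 8-byte write-heap-use-after-free:

/* Generic free-list unlink, illustrative only (not lib/eal/common/malloc_elem.c). */
#include <stddef.h>

struct list_elem {
    struct list_elem *prev;
    struct list_elem *next;
};

static void free_list_remove(struct list_elem *elem)
{
    if (elem->prev != NULL)
        elem->prev->next = elem->next;   /* 8-byte pointer write */
    if (elem->next != NULL)
        elem->next->prev = elem->prev;   /* 8-byte pointer write */
    elem->prev = NULL;
    elem->next = NULL;
}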
00:29:57.742 00:29:57.742 Thread 1 (Thread 0x7ffff7727a80 (LWP 93300)): 00:29:57.742 #0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=) at ./nptl/pthread_kill.c:44 00:29:57.742 tid = 00:29:57.742 ret = 0 00:29:57.742 pd = 00:29:57.742 old_mask = {__val = {0}} 00:29:57.742 ret = 00:29:57.742 pd = 00:29:57.742 old_mask = 00:29:57.742 ret = 00:29:57.742 tid = 00:29:57.742 ret = 00:29:57.742 resultvar = 00:29:57.742 resultvar = 00:29:57.742 __arg3 = 00:29:57.742 __arg2 = 00:29:57.742 __arg1 = 00:29:57.742 _a3 = 00:29:57.742 _a2 = 00:29:57.742 _a1 = 00:29:57.742 __futex = 00:29:57.742 resultvar = 00:29:57.742 __arg3 = 00:29:57.742 __arg2 = 00:29:57.742 __arg1 = 00:29:57.742 _a3 = 00:29:57.742 _a2 = 00:29:57.742 _a1 = 00:29:57.742 __futex = 00:29:57.742 __private = 00:29:57.742 __oldval = 00:29:57.742 #1 __pthread_kill_internal (signo=6, threadid=) at ./nptl/pthread_kill.c:78 00:29:57.742 No locals. 00:29:57.742 #2 __GI___pthread_kill (threadid=, signo=signo@entry=6) at ./nptl/pthread_kill.c:89 00:29:57.742 No locals. 00:29:57.742 #3 0x00007ffff664526e in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26 00:29:57.742 ret = 00:29:57.742 #4 0x00007ffff66288ff in __GI_abort () at ./stdlib/abort.c:79 00:29:57.742 save_stage = 1 00:29:57.742 act = {__sigaction_handler = {sa_handler = 0x20, sa_sigaction = 0x20}, sa_mask = {__val = {0 }}, sa_flags = 0, sa_restorer = 0x0} 00:29:57.742 #5 0x00007ffff791cc10 in __sanitizer::Abort () at ../../../../src/libsanitizer/sanitizer_common/sanitizer_posix_libcdep.cpp:143 00:29:57.742 No locals. 00:29:57.742 #6 0x00007ffff792cdec in __sanitizer::Die () at ../../../../src/libsanitizer/sanitizer_common/sanitizer_termination.cpp:58 00:29:57.742 No locals. 00:29:57.742 #7 0x00007ffff790643d in __asan::ScopedInErrorReport::~ScopedInErrorReport (this=0x7fffffffbee6, __in_chrg=) at ../../../../src/libsanitizer/asan/asan_report.cpp:192 00:29:57.742 buffer_copy = {<__sanitizer::InternalMmapVectorNoCtor> = {data_ = 0x7fffee1c1000 '=' , "\n==93300==ERROR: AddressSanitizer: heap-use-after-free on address 0x2000f97ee260 at pc 0x555555da63f8 bp 0x7fffffffcb60 sp 0x7fffffffcb"..., capacity_bytes_ = 65536, size_ = 65536}, } 00:29:57.742 buffer_copy = 00:29:57.742 l = 00:29:57.742 #8 0x00007ffff7905a3d in __asan::ReportGenericError (pc=93825000956920, bp=140737488341856, sp=sp@entry=140737488341840, addr=35188557931104, is_write=is_write@entry=true, access_size=8, fatal=true, exp=) at ../../../../src/libsanitizer/asan/asan_report.cpp:497 00:29:57.744 in_report = {error_report_lock_ = {}, static current_error_ = {kind = __asan::kErrorKindGeneric, {Base = {scariness = {score = 52, descr = "8-byte-write-heap-use-after-free\000\177\000\000h\244\332U\001\000\000\000\220\277\377\377\377\177\000\000\036\245\332UUU\000\000T\004\215\371\000 \000\000\000\000\000\000\000\000\000\000@", '\000' , " ", '\000' , "\240\277\377\377\377\177\000\000\000\000\000\000\000\000\000\000\300\334a\371\000 \000\000\000\220\377\377\377\037\000\000\300\332\377\377\377\037\000\000\200\353\242VUU\000\000\000\300\377\377\377\177\000\000\252\246\332UUU\000\000\000\000\000\000\000\000\000\000p\246\332UUU\000\000\000\354(\364\000\177\000\000\000\000\000\000"...}, tid = 0}, DeadlySignal = {<__asan::ErrorBase> = {scariness = {score = 52, descr = "8-byte-write-heap-use-after-free\000\177\000\000h\244\332U\001\000\000\000\220\277\377\377\377\177\000\000\036\245\332UUU\000\000T\004\215\371\000 \000\000\000\000\000\000\000\000\000\000@", '\000' , " ", '\000' , 
"\240\277\377\377\377\177\000\000\000\000\000\000\000\000\000\000\300\334a\371\000 \000\000\000\220\377\377\377\037\000\000\300\332\377\377\377\037\000\000\200\353\242VUU\000\000\000\300\377\377\377\177\000\000\252\246\332UUU\000\000\000\000\000\000\000\000\000\000p\246\332UUU\000\000\000\354(\364\000\177\000\000\000\000\000\000"...}, tid = 0}, signal = {siginfo = 0x7fff00000000, context = 0x2000f97ee260, addr = 8, pc = 140737488340192, sp = 0, bp = 140737488339952, is_memory_access = 209, write_flag = (__sanitizer::SignalContext::Read | __sanitizer::SignalContext::Write | unknown: 0x7ffc), is_true_faulting_addr = false}}, DoubleFree = {<__asan::ErrorBase> = {scariness = {score = 52, descr = "8-byte-write-heap-use-after-free\000\177\000\000h\244\332U\001\000\000\000\220\277\377\377\377\177\000\000\036\245\332UUU\000\000T\004\215\371\000 \000\000\000\000\000\000\000\000\000\000@", '\000' , " ", '\000' , "\240\277\377\377\377\177\000\000\000\000\000\000\000\000\000\000\300\334a\371\000 \000\000\000\220\377\377\377\037\000\000\300\332\377\377\377\037\000\000\200\353\242VUU\000\000\000\300\377\377\377\177\000\000\252\246\332UUU\000\000\000\000\000\000\000\000\000\000p\246\332UUU\000\000\000\354(\364\000\177\000\000\000\000\000\000"...}, tid = 0}, second_free_stack = 0x7fff00000000, addr_description = {addr = 35188557931104, alloc_tid = 8, free_tid = 140737488340192, alloc_stack_id = 0, free_stack_id = 0, chunk_access = {bad_addr = 140737488339952, offset = 140737327463633, chunk_begin = 140737488340224, chunk_size = 140737488340370, user_requested_alignment = 2272, access_type = 0, alloc_type = 3}}}, NewDeleteTypeMismatch = {<__asan::ErrorBase> = {scariness = {score = 52, descr = "8-byte-write-heap-use-after-free\000\177\000\000h\244\332U\001\000\000\000\220\277\377\377\377\177\000\000\036\245\332UUU\000\000T\004\215\371\000 \000\000\000\000\000\000\000\000\000\000@", '\000' , " ", '\000' , "\240\277\377\377\377\177\000\000\000\000\000\000\000\000\000\000\300\334a\371\000 \000\000\000\220\377\377\377\037\000\000\300\332\377\377\377\037\000\000\200\353\242VUU\000\000\000\300\377\377\377\177\000\000\252\246\332UUU\000\000\000\000\000\000\000\000\000\000p\246\332UUU\000\000\000\354(\364\000\177\000\000\000\000\000\000"...}, tid = 0}, free_stack = 0x7fff00000000, addr_description = {addr = 35188557931104, alloc_tid = 8, free_tid = 140737488340192, alloc_stack_id = 0, free_stack_id = 0, chunk_access = {bad_addr = 140737488339952, offset = 140737327463633, chunk_begin = 140737488340224, chunk_size = 140737488340370, user_requested_alignment = 2272, access_type = 0, alloc_type = 3}}, delete_size = 0, delete_alignment = 140733193388034}, FreeNotMalloced = {<__asan::ErrorBase> = {scariness = {score = 52, descr = "8-byte-write-heap-use-after-free\000\177\000\000h\244\332U\001\000\000\000\220\277\377\377\377\177\000\000\036\245\332UUU\000\000T\004\215\371\000 \000\000\000\000\000\000\000\000\000\000@", '\000' , " ", '\000' , "\240\277\377\377\377\177\000\000\000\000\000\000\000\000\000\000\300\334a\371\000 \000\000\000\220\377\377\377\037\000\000\300\332\377\377\377\037\000\000\200\353\242VUU\000\000\000\300\377\377\377\177\000\000\252\246\332UUU\000\000\000\000\000\000\000\000\000\000p\246\332UUU\000\000\000\354(\364\000\177\000\000\000\000\000\000"...}, tid = 0}, free_stack = 0x7fff00000000, addr_description = {data = {kind = 4185842272, {shadow = {addr = 8, kind = (unknown: 0xe0), shadow_byte = 196 '\304'}, heap = {addr = 8, alloc_tid = 140737488340192, free_tid = 0, alloc_stack_id = 4294951920, 
free_stack_id = 32767, chunk_access = {bad_addr = 140737327463633, offset = 140737488340224, chunk_begin = 140737488340370, chunk_size = 140737488341216, user_requested_alignment = 0, access_type = 0, alloc_type = 0}}, stack = {addr = 8, tid = 140737488340192, offset = 0, frame_pc = 140737488339952, access_size = 140737327463633, frame_descr = 0x7fffffffc500 "\200\353\242VUU"}, global = {addr = 8, static kMaxGlobals = 4, globals = {{beg = 140737488340192, size = 0, size_with_redzone = 140737488339952, name = 0x7ffff668fcd1 <__vsnprintf_internal+145> "H\213U\350dH+\024%(", module_name = 0x7fffffffc500 "\200\353\242VUU", has_dynamic_init = 140737488340370, gcc_location = 0x7fffffffc8e0, odr_indicator = 0}, {beg = 140733193388034, size = 140737488341312, size_with_redzone = 140737488340080, name = 0x7ffff78ffb04 <__asan_region_is_poisoned(__sanitizer::uptr, __sanitizer::uptr)+516> "\204\300\017\205$\002", module_name = 0xc0 , has_dynamic_init = 140737488342016, gcc_location = 0x2000f9618f40, odr_indicator = 140737346767651}, {beg = 1721971619, size = 27, size_with_redzone = 140737488340160, name = 0x7fffffffc930 "\004\373\217\367\377\177", module_name = 0x0, has_dynamic_init = 140737346796292, gcc_location = 0x1000, odr_indicator = 140737488342096}, {beg = 35188556009472, size = 140737346767651, size_with_redzone = 35188556012992, name = 0x7ffff78ffb04 <__asan_region_is_poisoned(__sanitizer::uptr, __sanitizer::uptr)+516> "\204\300\017\205$\002", module_name = 0xc0 , has_dynamic_init = 140737488342144, gcc_location = 0x2000f9618f40, odr_indicator = 140737346767651}}, reg_sites = {4294952104, 32767, 8, 0}, access_size = 4096, size = 248 '\370'}, wild = {addr = 8, access_size = 140737488340192}}}}}, AllocTypeMismatch = {<__asan::ErrorBase> = {scariness = {score = 52, descr = "8-byte-write-heap-use-after-free\000\177\000\000h\244\332U\001\000\000\000\220\277\377\377\377\177\000\000\036\245\332UUU\000\000T\004\215\371\000 \000\000\000\000\000\000\000\000\000\000@", '\000' , " ", '\000' , "\240\277\377\377\377\177\000\000\000\000\000\000\000\000\000\000\300\334a\371\000 \000\000\000\220\377\377\377\037\000\000\300\332\377\377\377\037\000\000\200\353\242VUU\000\000\000\300\377\377\377\177\000\000\252\246\332UUU\000\000\000\000\000\000\000\000\000\000p\246\332UUU\000\000\000\354(\364\000\177\000\000\000\000\000\000"...}, tid = 0}, dealloc_stack = 0x7fff00000000, alloc_type = 4185842272, dealloc_type = 8192, addr_description = {data = {kind = 8, {shadow = {addr = 140737488340192, kind = __asan::kShadowKindLow, shadow_byte = 0 '\000'}, heap = {addr = 140737488340192, alloc_tid = 0, free_tid = 140737488339952, alloc_stack_id = 4134075601, free_stack_id = 32767, chunk_access = {bad_addr = 140737488340224, offset = 140737488340370, chunk_begin = 140737488341216, chunk_size = 0, user_requested_alignment = 2, access_type = 0, alloc_type = 0}}, stack = {addr = 140737488340192, tid = 0, offset = 140737488339952, frame_pc = 140737327463633, access_size = 140737488340224, frame_descr = 0x7fffffffc592 "\332UUU"}, global = {addr = 140737488340192, static kMaxGlobals = 4, globals = {{beg = 0, size = 140737488339952, size_with_redzone = 140737327463633, name = 0x7fffffffc500 "\200\353\242VUU", module_name = 0x7fffffffc592 "\332UUU", has_dynamic_init = 140737488341216, gcc_location = 0x0, odr_indicator = 140733193388034}, {beg = 140737488341312, size = 140737488340080, size_with_redzone = 140737346796292, name = 0xc0 , module_name = 0x7fffffffcc00 "", has_dynamic_init = 35188556009280, gcc_location = 0x7ffff78f8b23 
<___interceptor_memset(void*, int, __sanitizer::uptr)+275>, odr_indicator = 1721971619}, {beg = 27, size = 140737488340160, size_with_redzone = 140737488341296, name = 0x0, module_name = 0x7ffff78ffb04 <__asan_region_is_poisoned(__sanitizer::uptr, __sanitizer::uptr)+516> "\204\300\017\205$\002", has_dynamic_init = 4096, gcc_location = 0x7fffffffcc50, odr_indicator = 35188556009472}, {beg = 140737346767651, size = 35188556012992, size_with_redzone = 140737346796292, name = 0xc0 , module_name = 0x7fffffffcc80 "\330`\332UUU", has_dynamic_init = 35188556009280, gcc_location = 0x7ffff78f8b23 <___interceptor_memset(void*, int, __sanitizer::uptr)+275>, odr_indicator = 140737488340136}}, reg_sites = {8, 0, 4096, 0}, access_size = 93825000956920, size = 96 '`'}, wild = {addr = 140737488340192, access_size = 0}}}}}, MallocUsableSizeNotOwned = {<__asan::ErrorBase> = {scariness = {score = 52, descr = "8-byte-write-heap-use-after-free\000\177\000\000h\244\332U\001\000\000\000\220\277\377\377\377\177\000\000\036\245\332UUU\000\000T\004\215\371\000 \000\000\000\000\000\000\000\000\000\000@", '\000' , " ", '\000' , "\240\277\377\377\377\177\000\000\000\000\000\000\000\000\000\000\300\334a\371\000 \000\000\000\220\377\377\377\037\000\000\300\332\377\377\377\037\000\000\200\353\242VUU\000\000\000\300\377\377\377\177\000\000\252\246\332UUU\000\000\000\000\000\000\000\000\000\000p\246\332UUU\000\000\000\354(\364\000\177\000\000\000\000\000\000"...}, tid = 0}, stack = 0x7fff00000000, addr_description = {data = {kind = 4185842272, {shadow = {addr = 8, kind = (unknown: 0xe0), shadow_byte = 196 '\304'}, heap = {addr = 8, alloc_tid = 140737488340192, free_tid = 0, alloc_stack_id = 4294951920, free_stack_id = 32767, chunk_access = {bad_addr = 140737327463633, offset = 140737488340224, chunk_begin = 140737488340370, chunk_size = 140737488341216, user_requested_alignment = 0, access_type = 0, alloc_type = 0}}, stack = {addr = 8, tid = 140737488340192, offset = 0, frame_pc = 140737488339952, access_size = 140737327463633, frame_descr = 0x7fffffffc500 "\200\353\242VUU"}, global = {addr = 8, static kMaxGlobals = 4, globals = {{beg = 140737488340192, size = 0, size_with_redzone = 140737488339952, name = 0x7ffff668fcd1 <__vsnprintf_internal+145> "H\213U\350dH+\024%(", module_name = 0x7fffffffc500 "\200\353\242VUU", has_dynamic_init = 140737488340370, gcc_location = 0x7fffffffc8e0, odr_indicator = 0}, {beg = 140733193388034, size = 140737488341312, size_with_redzone = 140737488340080, name = 0x7ffff78ffb04 <__asan_region_is_poisoned(__sanitizer::uptr, __sanitizer::uptr)+516> "\204\300\017\205$\002", module_name = 0xc0 , has_dynamic_init = 140737488342016, gcc_location = 0x2000f9618f40, odr_indicator = 140737346767651}, {beg = 1721971619, size = 27, size_with_redzone = 140737488340160, name = 0x7fffffffc930 "\004\373\217\367\377\177", module_name = 0x0, has_dynamic_init = 140737346796292, gcc_location = 0x1000, odr_indicator = 140737488342096}, {beg = 35188556009472, size = 140737346767651, size_with_redzone = 35188556012992, name = 0x7ffff78ffb04 <__asan_region_is_poisoned(__sanitizer::uptr, __sanitizer::uptr)+516> "\204\300\017\205$\002", module_name = 0xc0 , has_dynamic_init = 140737488342144, gcc_location = 0x2000f9618f40, odr_indicator = 140737346767651}}, reg_sites = {4294952104, 32767, 8, 0}, access_size = 4096, size = 248 '\370'}, wild = {addr = 8, access_size = 140737488340192}}}}}, SanitizerGetAllocatedSizeNotOwned = {<__asan::ErrorBase> = {scariness = {score = 52, descr = 
"8-byte-write-heap-use-after-free\000\177\000\000h\244\332U\001\000\000\000\220\277\377\377\377\177\000\000\036\245\332UUU\000\000T\004\215\371\000 \000\000\000\000\000\000\000\000\000\000@", '\000' , " ", '\000' , "\240\277\377\377\377\177\000\000\000\000\000\000\000\000\000\000\300\334a\371\000 \000\000\000\220\377\377\377\037\000\000\300\332\377\377\377\037\000\000\200\353\242VUU\000\000\000\300\377\377\377\177\000\000\252\246\332UUU\000\000\000\000\000\000\000\000\000\000p\246\332UUU\000\000\000\354(\364\000\177\000\000\000\000\000\000"...}, tid = 0}, stack = 0x7fff00000000, addr_description = {data = {kind = 4185842272, {shadow = {addr = 8, kind = (unknown: 0xe0), shadow_byte = 196 '\304'}, heap = {addr = 8, alloc_tid = 140737488340192, free_tid = 0, alloc_stack_id = 4294951920, free_stack_id = 32767, chunk_access = {bad_addr = 140737327463633, offset = 140737488340224, chunk_begin = 140737488340370, chunk_size = 140737488341216, user_requested_alignment = 0, access_type = 0, alloc_type = 0}}, stack = {addr = 8, tid = 140737488340192, offset = 0, frame_pc = 140737488339952, access_size = 140737327463633, frame_descr = 0x7fffffffc500 "\200\353\242VUU"}, global = {addr = 8, static kMaxGlobals = 4, globals = {{beg = 140737488340192, size = 0, size_with_redzone = 140737488339952, name = 0x7ffff668fcd1 <__vsnprintf_internal+145> "H\213U\350dH+\024%(", module_name = 0x7fffffffc500 "\200\353\242VUU", has_dynamic_init = 140737488340370, gcc_location = 0x7fffffffc8e0, odr_indicator = 0}, {beg = 140733193388034, size = 140737488341312, size_with_redzone = 140737488340080, name = 0x7ffff78ffb04 <__asan_region_is_poisoned(__sanitizer::uptr, __sanitizer::uptr)+516> "\204\300\017\205$\002", module_name = 0xc0 , has_dynamic_init = 140737488342016, gcc_location = 0x2000f9618f40, odr_indicator = 140737346767651}, {beg = 1721971619, size = 27, size_with_redzone = 140737488340160, name = 0x7fffffffc930 "\004\373\217\367\377\177", module_name = 0x0, has_dynamic_init = 140737346796292, gcc_location = 0x1000, odr_indicator = 140737488342096}, {beg = 35188556009472, size = 140737346767651, size_with_redzone = 35188556012992, name = 0x7ffff78ffb04 <__asan_region_is_poisoned(__sanitizer::uptr, __sanitizer::uptr)+516> "\204\300\017\205$\002", module_name = 0xc0 , has_dynamic_init = 140737488342144, gcc_location = 0x2000f9618f40, odr_indicator = 140737346767651}}, reg_sites = {4294952104, 32767, 8, 0}, access_size = 4096, size = 248 '\370'}, wild = {addr = 8, access_size = 140737488340192}}}}}, CallocOverflow = {<__asan::ErrorBase> = {scariness = {score = 52, descr = "8-byte-write-heap-use-after-free\000\177\000\000h\244\332U\001\000\000\000\220\277\377\377\377\177\000\000\036\245\332UUU\000\000T\004\215\371\000 \000\000\000\000\000\000\000\000\000\000@", '\000' , " ", '\000' , "\240\277\377\377\377\177\000\000\000\000\000\000\000\000\000\000\300\334a\371\000 \000\000\000\220\377\377\377\037\000\000\300\332\377\377\377\037\000\000\200\353\242VUU\000\000\000\300\377\377\377\177\000\000\252\246\332UUU\000\000\000\000\000\000\000\000\000\000p\246\332UUU\000\000\000\354(\364\000\177\000\000\000\000\000\000"...}, tid = 0}, stack = 0x7fff00000000, count = 35188557931104, size = 8}, ReallocArrayOverflow = {<__asan::ErrorBase> = {scariness = {score = 52, descr = "8-byte-write-heap-use-after-free\000\177\000\000h\244\332U\001\000\000\000\220\277\377\377\377\177\000\000\036\245\332UUU\000\000T\004\215\371\000 \000\000\000\000\000\000\000\000\000\000@", '\000' , " ", '\000' , 
"\240\277\377\377\377\177\000\000\000\000\000\000\000\000\000\000\300\334a\371\000 \000\000\000\220\377\377\377\037\000\000\300\332\377\377\377\037\000\000\200\353\242VUU\000\000\000\300\377\377\377\177\000\000\252\246\332UUU\000\000\000\000\000\000\000\000\000\000p\246\332UUU\000\000\000\354(\364\000\177\000\000\000\000\000\000"...}, tid = 0}, stack = 0x7fff00000000, count = 35188557931104, size = 8}, PvallocOverflow = {<__asan::ErrorBase> = {scariness = {score = 52, descr = "8-byte-write-heap-use-after-free\000\177\000\000h\244\332U\001\000\000\000\220\277\377\377\377\177\000\000\036\245\332UUU\000\000T\004\215\371\000 \000\000\000\000\000\000\000\000\000\000@", '\000' , " ", '\000' , "\240\277\377\377\377\177\000\000\000\000\000\000\000\000\000\000\300\334a\371\000 \000\000\000\220\377\377\377\037\000\000\300\332\377\377\377\037\000\000\200\353\242VUU\000\000\000\300\377\377\377\177\000\000\252\246\332UUU\000\000\000\000\000\000\000\000\000\000p\246\332UUU\000\000\000\354(\364\000\177\000\000\000\000\000\000"...}, tid = 0}, stack = 0x7fff00000000, size = 35188557931104}, InvalidAllocationAlignment = {<__asan::ErrorBase> = {scariness = {score = 52, descr = "8-byte-write-heap-use-after-free\000\177\000\000h\244\332U\001\000\000\000\220\277\377\377\377\177\000\000\036\245\332UUU\000\000T\004\215\371\000 \000\000\000\000\000\000\000\000\000\000@", '\000' , " ", '\000' , "\240\277\377\377\377\177\000\000\000\000\000\000\000\000\000\000\300\334a\371\000 \000\000\000\220\377\377\377\037\000\000\300\332\377\377\377\037\000\000\200\353\242VUU\000\000\000\300\377\377\377\177\000\000\252\246\332UUU\000\000\000\000\000\000\000\000\000\000p\246\332UUU\000\000\000\354(\364\000\177\000\000\000\000\000\000"...}, tid = 0}, stack = 0x7fff00000000, alignment = 35188557931104}, InvalidAlignedAllocAlignment = {<__asan::ErrorBase> = {scariness = {score = 52, descr = "8-byte-write-heap-use-after-free\000\177\000\000h\244\332U\001\000\000\000\220\277\377\377\377\177\000\000\036\245\332UUU\000\000T\004\215\371\000 \000\000\000\000\000\000\000\000\000\000@", '\000' , " ", '\000' , "\240\277\377\377\377\177\000\000\000\000\000\000\000\000\000\000\300\334a\371\000 \000\000\000\220\377\377\377\037\000\000\300\332\377\377\377\037\000\000\200\353\242VUU\000\000\000\300\377\377\377\177\000\000\252\246\332UUU\000\000\000\000\000\000\000\000\000\000p\246\332UUU\000\000\000\354(\364\000\177\000\000\000\000\000\000"...}, tid = 0}, stack = 0x7fff00000000, size = 35188557931104, alignment = 8}, InvalidPosixMemalignAlignment = {<__asan::ErrorBase> = {scariness = {score = 52, descr = "8-byte-write-heap-use-after-free\000\177\000\000h\244\332U\001\000\000\000\220\277\377\377\377\177\000\000\036\245\332UUU\000\000T\004\215\371\000 \000\000\000\000\000\000\000\000\000\000@", '\000' , " ", '\000' , "\240\277\377\377\377\177\000\000\000\000\000\000\000\000\000\000\300\334a\371\000 \000\000\000\220\377\377\377\037\000\000\300\332\377\377\377\037\000\000\200\353\242VUU\000\000\000\300\377\377\377\177\000\000\252\246\332UUU\000\000\000\000\000\000\000\000\000\000p\246\332UUU\000\000\000\354(\364\000\177\000\000\000\000\000\000"...}, tid = 0}, stack = 0x7fff00000000, alignment = 35188557931104}, AllocationSizeTooBig = {<__asan::ErrorBase> = {scariness = {score = 52, descr = "8-byte-write-heap-use-after-free\000\177\000\000h\244\332U\001\000\000\000\220\277\377\377\377\177\000\000\036\245\332UUU\000\000T\004\215\371\000 \000\000\000\000\000\000\000\000\000\000@", '\000' , " ", '\000' , 
"\240\277\377\377\377\177\000\000\000\000\000\000\000\000\000\000\300\334a\371\000 \000\000\000\220\377\377\377\037\000\000\300\332\377\377\377\037\000\000\200\353\242VUU\000\000\000\300\377\377\377\177\000\000\252\246\332UUU\000\000\000\000\000\000\000\000\000\000p\246\332UUU\000\000\000\354(\364\000\177\000\000\000\000\000\000"...}, tid = 0}, stack = 0x7fff00000000, user_size = 35188557931104, total_size = 8, max_size = 140737488340192}, RssLimitExceeded = {<__asan::ErrorBase> = {scariness = {score = 52, descr = "8-byte-write-heap-use-after-free\000\177\000\000h\244\332U\001\000\000\000\220\277\377\377\377\177\000\000\036\245\332UUU\000\000T\004\215\371\000 \000\000\000\000\000\000\000\000\000\000@", '\000' , " ", '\000' , "\240\277\377\377\377\177\000\000\000\000\000\000\000\000\000\000\300\334a\371\000 \000\000\000\220\377\377\377\037\000\000\300\332\377\377\377\037\000\000\200\353\242VUU\000\000\000\300\377\377\377\177\000\000\252\246\332UUU\000\000\000\000\000\000\000\000\000\000p\246\332UUU\000\000\000\354(\364\000\177\000\000\000\000\000\000"...}, tid = 0}, stack = 0x7fff00000000}, OutOfMemory = {<__asan::ErrorBase> = {scariness = {score = 52, descr = "8-byte-write-heap-use-after-free\000\177\000\000h\244\332U\001\000\000\000\220\277\377\377\377\177\000\000\036\245\332UUU\000\000T\004\215\371\000 \000\000\000\000\000\000\000\000\000\000@", '\000' , " ", '\000' , "\240\277\377\377\377\177\000\000\000\000\000\000\000\000\000\000\300\334a\371\000 \000\000\000\220\377\377\377\037\000\000\300\332\377\377\377\037\000\000\200\353\242VUU\000\000\000\300\377\377\377\177\000\000\252\246\332UUU\000\000\000\000\000\000\000\000\000\000p\246\332UUU\000\000\000\354(\364\000\177\000\000\000\000\000\000"...}, tid = 0}, stack = 0x7fff00000000, requested_size = 35188557931104}, StringFunctionMemoryRangesOverlap = {<__asan::ErrorBase> = {scariness = {score = 52, descr = "8-byte-write-heap-use-after-free\000\177\000\000h\244\332U\001\000\000\000\220\277\377\377\377\177\000\000\036\245\332UUU\000\000T\004\215\371\000 \000\000\000\000\000\000\000\000\000\000@", '\000' , " ", '\000' , "\240\277\377\377\377\177\000\000\000\000\000\000\000\000\000\000\300\334a\371\000 \000\000\000\220\377\377\377\037\000\000\300\332\377\377\377\037\000\000\200\353\242VUU\000\000\000\300\377\377\377\177\000\000\252\246\332UUU\000\000\000\000\000\000\000\000\000\000p\246\332UUU\000\000\000\354(\364\000\177\000\000\000\000\000\000"...}, tid = 0}, stack = 0x7fff00000000, length1 = 35188557931104, length2 = 8, addr1_description = {data = {kind = 4294952160, {shadow = {addr = 0, kind = (unknown: 0xf0), shadow_byte = 195 '\303'}, heap = {addr = 0, alloc_tid = 140737488339952, free_tid = 140737327463633, alloc_stack_id = 4294952192, free_stack_id = 32767, chunk_access = {bad_addr = 140737488340370, offset = 140737488341216, chunk_begin = 0, chunk_size = 140733193388034, user_requested_alignment = 2368, access_type = 0, alloc_type = 3}}, stack = {addr = 0, tid = 140737488339952, offset = 140737327463633, frame_pc = 140737488340224, access_size = 140737488340370, frame_descr = 0x7fffffffc8e0 "\004\373\217\367\377\177"}, global = {addr = 0, static kMaxGlobals = 4, globals = {{beg = 140737488339952, size = 140737327463633, size_with_redzone = 140737488340224, name = 0x7fffffffc592 "\332UUU", module_name = 0x7fffffffc8e0 "\004\373\217\367\377\177", has_dynamic_init = 0, gcc_location = 0x7fff00000002, odr_indicator = 140737488341312}, {beg = 140737488340080, size = 140737346796292, size_with_redzone = 192, name = 0x7fffffffcc00 "", 
module_name = 0x2000f9618f40 "", has_dynamic_init = 140737346767651, gcc_location = 0x66a333a3, odr_indicator = 27}, {beg = 140737488340160, size = 140737488341296, size_with_redzone = 0, name = 0x7ffff78ffb04 <__asan_region_is_poisoned(__sanitizer::uptr, __sanitizer::uptr)+516> "\204\300\017\205$\002", module_name = 0x1000 , has_dynamic_init = 140737488342096, gcc_location = 0x2000f9619000, odr_indicator = 140737346767651}, {beg = 35188556012992, size = 140737346796292, size_with_redzone = 192, name = 0x7fffffffcc80 "\330`\332UUU", module_name = 0x2000f9618f40 "", has_dynamic_init = 140737346767651, gcc_location = 0x7fffffffc4a8, odr_indicator = 8}}, reg_sites = {4096, 0, 1440375800, 21845}, access_size = 140737488341856, size = 80 'P'}, wild = {addr = 0, access_size = 140737488339952}}}}, addr2_description = {data = {kind = 8, {shadow = {addr = 140737347155860, kind = __asan::kShadowKindGap, shadow_byte = 253 '\375'}, heap = {addr = 140737347155860, alloc_tid = 35188556037377, free_tid = 0, alloc_stack_id = 4153408260, free_stack_id = 32767, chunk_access = {bad_addr = 256, offset = 140737488343664, chunk_begin = 35188558858136, chunk_size = 140737346767651, user_requested_alignment = 1233, access_type = 3, alloc_type = 2}}, stack = {addr = 140737347155860, tid = 35188556037377, offset = 0, frame_pc = 140737346796292, access_size = 256, frame_descr = 0x7fffffffd270 "\001"}, global = {addr = 140737347155860, static kMaxGlobals = 4, globals = {{beg = 35188556037377, size = 0, size_with_redzone = 140737346796292, name = 0x100 , module_name = 0x7fffffffd270 "\001", has_dynamic_init = 35188558858136, gcc_location = 0x7ffff78f8b23 <___interceptor_memset(void*, int, __sanitizer::uptr)+275>, odr_indicator = 140737346778321}, {beg = 93824997642105, size = 93824997657132, size_with_redzone = 93824997755313, name = 0x555555a75baf "\360H\377\0051\275\363", module_name = 0x555555a76ff7 "\211E\374\360H\377\005\026\247\363", has_dynamic_init = 93824997230116, gcc_location = 0x7ffff662a1ca <__libc_start_call_main+122>, odr_indicator = 140737327047307}, {beg = 93824997201429, size = 7453010382234678117, size_with_redzone = 14737695520, name = 0xa72656c , module_name = 0x0, has_dynamic_init = 35188557926392, gcc_location = 0x1bf8ecd2e7d58800, odr_indicator = 35188557926912}, {beg = 140737488341776, size = 140737328989408, size_with_redzone = 140737292336416, name = 0x7ffff68044e0 <_IO_2_1_stderr_> "\207(\255\373", module_name = 0x7fffffffcc50 "@", has_dynamic_init = 93825007009792, gcc_location = 0x7ffff78cd321 <___interceptor_vfprintf(__sanitizer::__sanitizer_FILE *, const char *, typedef __va_list_tag __va_list_tag *)+177>, odr_indicator = 1}}, reg_sites = {4153775751, 32767, 48, 48}, access_size = 140737488342032, size = 48 '0'}, wild = {addr = 140737347155860, access_size = 35188556037377}}}}, function = 0x1bf8ecd2e7d58800 }, StringFunctionSizeOverflow = {<__asan::ErrorBase> = {scariness = {score = 52, descr = "8-byte-write-heap-use-after-free\000\177\000\000h\244\332U\001\000\000\000\220\277\377\377\377\177\000\000\036\245\332UUU\000\000T\004\215\371\000 \000\000\000\000\000\000\000\000\000\000@", '\000' , " ", '\000' , "\240\277\377\377\377\177\000\000\000\000\000\000\000\000\000\000\300\334a\371\000 \000\000\000\220\377\377\377\037\000\000\300\332\377\377\377\037\000\000\200\353\242VUU\000\000\000\300\377\377\377\177\000\000\252\246\332UUU\000\000\000\000\000\000\000\000\000\000p\246\332UUU\000\000\000\354(\364\000\177\000\000\000\000\000\000"...}, tid = 0}, stack = 0x7fff00000000, addr_description 
= {data = {kind = 4185842272, {shadow = {addr = 8, kind = (unknown: 0xe0), shadow_byte = 196 '\304'}, heap = {addr = 8, alloc_tid = 140737488340192, free_tid = 0, alloc_stack_id = 4294951920, free_stack_id = 32767, chunk_access = {bad_addr = 140737327463633, offset = 140737488340224, chunk_begin = 140737488340370, chunk_size = 140737488341216, user_requested_alignment = 0, access_type = 0, alloc_type = 0}}, stack = {addr = 8, tid = 140737488340192, offset = 0, frame_pc = 140737488339952, access_size = 140737327463633, frame_descr = 0x7fffffffc500 "\200\353\242VUU"}, global = {addr = 8, static kMaxGlobals = 4, globals = {{beg = 140737488340192, size = 0, size_with_redzone = 140737488339952, name = 0x7ffff668fcd1 <__vsnprintf_internal+145> "H\213U\350dH+\024%(", module_name = 0x7fffffffc500 "\200\353\242VUU", has_dynamic_init = 140737488340370, gcc_location = 0x7fffffffc8e0, odr_indicator = 0}, {beg = 140733193388034, size = 140737488341312, size_with_redzone = 140737488340080, name = 0x7ffff78ffb04 <__asan_region_is_poisoned(__sanitizer::uptr, __sanitizer::uptr)+516> "\204\300\017\205$\002", module_name = 0xc0 , has_dynamic_init = 140737488342016, gcc_location = 0x2000f9618f40, odr_indicator = 140737346767651}, {beg = 1721971619, size = 27, size_with_redzone = 140737488340160, name = 0x7fffffffc930 "\004\373\217\367\377\177", module_name = 0x0, has_dynamic_init = 140737346796292, gcc_location = 0x1000, odr_indicator = 140737488342096}, {beg = 35188556009472, size = 140737346767651, size_with_redzone = 35188556012992, name = 0x7ffff78ffb04 <__asan_region_is_poisoned(__sanitizer::uptr, __sanitizer::uptr)+516> "\204\300\017\205$\002", module_name = 0xc0 , has_dynamic_init = 140737488342144, gcc_location = 0x2000f9618f40, odr_indicator = 140737346767651}}, reg_sites = {4294952104, 32767, 8, 0}, access_size = 4096, size = 248 '\370'}, wild = {addr = 8, access_size = 140737488340192}}}}, size = 140737488341856}, BadParamsToAnnotateContiguousContainer = {<__asan::ErrorBase> = {scariness = {score = 52, descr = "8-byte-write-heap-use-after-free\000\177\000\000h\244\332U\001\000\000\000\220\277\377\377\377\177\000\000\036\245\332UUU\000\000T\004\215\371\000 \000\000\000\000\000\000\000\000\000\000@", '\000' , " ", '\000' , "\240\277\377\377\377\177\000\000\000\000\000\000\000\000\000\000\300\334a\371\000 \000\000\000\220\377\377\377\037\000\000\300\332\377\377\377\037\000\000\200\353\242VUU\000\000\000\300\377\377\377\177\000\000\252\246\332UUU\000\000\000\000\000\000\000\000\000\000p\246\332UUU\000\000\000\354(\364\000\177\000\000\000\000\000\000"...}, tid = 0}, stack = 0x7fff00000000, beg = 35188557931104, end = 8, old_mid = 140737488340192, new_mid = 0}, BadParamsToAnnotateDoubleEndedContiguousContainer = {<__asan::ErrorBase> = {scariness = {score = 52, descr = "8-byte-write-heap-use-after-free\000\177\000\000h\244\332U\001\000\000\000\220\277\377\377\377\177\000\000\036\245\332UUU\000\000T\004\215\371\000 \000\000\000\000\000\000\000\000\000\000@", '\000' , " ", '\000' , "\240\277\377\377\377\177\000\000\000\000\000\000\000\000\000\000\300\334a\371\000 \000\000\000\220\377\377\377\037\000\000\300\332\377\377\377\037\000\000\200\353\242VUU\000\000\000\300\377\377\377\177\000\000\252\246\332UUU\000\000\000\000\000\000\000\000\000\000p\246\332UUU\000\000\000\354(\364\000\177\000\000\000\000\000\000"...}, tid = 0}, stack = 0x7fff00000000, storage_beg = 35188557931104, storage_end = 8, old_container_beg = 140737488340192, old_container_end = 0, new_container_beg = 140737488339952, new_container_end = 
140737327463633}, ODRViolation = {<__asan::ErrorBase> = {scariness = {score = 52, descr = "8-byte-write-heap-use-after-free\000\177\000\000h\244\332U\001\000\000\000\220\277\377\377\377\177\000\000\036\245\332UUU\000\000T\004\215\371\000 \000\000\000\000\000\000\000\000\000\000@", '\000' , " ", '\000' , "\240\277\377\377\377\177\000\000\000\000\000\000\000\000\000\000\300\334a\371\000 \000\000\000\220\377\377\377\037\000\000\300\332\377\377\377\037\000\000\200\353\242VUU\000\000\000\300\377\377\377\177\000\000\252\246\332UUU\000\000\000\000\000\000\000\000\000\000p\246\332UUU\000\000\000\354(\364\000\177\000\000\000\000\000\000"...}, tid = 0}, global1 = {beg = 140733193388032, size = 35188557931104, size_with_redzone = 8, name = 0x7fffffffc4e0 "", module_name = 0x0, has_dynamic_init = 140737488339952, gcc_location = 0x7ffff668fcd1 <__vsnprintf_internal+145>, odr_indicator = 140737488340224}, global2 = {beg = 140737488340370, size = 140737488341216, size_with_redzone = 0, name = 0x7fff00000002 , module_name = 0x7fffffffc940 "P\314\377\377\377\177", has_dynamic_init = 140737488340080, gcc_location = 0x7ffff78ffb04 <__asan_region_is_poisoned(__sanitizer::uptr, __sanitizer::uptr)+516>, odr_indicator = 192}, stack_id1 = 4294953984, stack_id2 = 32767}, InvalidPointerPair = {<__asan::ErrorBase> = {scariness = {score = 52, descr = "8-byte-write-heap-use-after-free\000\177\000\000h\244\332U\001\000\000\000\220\277\377\377\377\177\000\000\036\245\332UUU\000\000T\004\215\371\000 \000\000\000\000\000\000\000\000\000\000@", '\000' , " ", '\000' , "\240\277\377\377\377\177\000\000\000\000\000\000\000\000\000\000\300\334a\371\000 \000\000\000\220\377\377\377\037\000\000\300\332\377\377\377\037\000\000\200\353\242VUU\000\000\000\300\377\377\377\177\000\000\252\246\332UUU\000\000\000\000\000\000\000\000\000\000p\246\332UUU\000\000\000\354(\364\000\177\000\000\000\000\000\000"...}, tid = 0}, pc = 140733193388032, bp = 35188557931104, sp = 8, addr1_description = {data = {kind = 4294952160, {shadow = {addr = 0, kind = (unknown: 0xf0), shadow_byte = 195 '\303'}, heap = {addr = 0, alloc_tid = 140737488339952, free_tid = 140737327463633, alloc_stack_id = 4294952192, free_stack_id = 32767, chunk_access = {bad_addr = 140737488340370, offset = 140737488341216, chunk_begin = 0, chunk_size = 140733193388034, user_requested_alignment = 2368, access_type = 0, alloc_type = 3}}, stack = {addr = 0, tid = 140737488339952, offset = 140737327463633, frame_pc = 140737488340224, access_size = 140737488340370, frame_descr = 0x7fffffffc8e0 "\004\373\217\367\377\177"}, global = {addr = 0, static kMaxGlobals = 4, globals = {{beg = 140737488339952, size = 140737327463633, size_with_redzone = 140737488340224, name = 0x7fffffffc592 "\332UUU", module_name = 0x7fffffffc8e0 "\004\373\217\367\377\177", has_dynamic_init = 0, gcc_location = 0x7fff00000002, odr_indicator = 140737488341312}, {beg = 140737488340080, size = 140737346796292, size_with_redzone = 192, name = 0x7fffffffcc00 "", module_name = 0x2000f9618f40 "", has_dynamic_init = 140737346767651, gcc_location = 0x66a333a3, odr_indicator = 27}, {beg = 140737488340160, size = 140737488341296, size_with_redzone = 0, name = 0x7ffff78ffb04 <__asan_region_is_poisoned(__sanitizer::uptr, __sanitizer::uptr)+516> "\204\300\017\205$\002", module_name = 0x1000 , has_dynamic_init = 140737488342096, gcc_location = 0x2000f9619000, odr_indicator = 140737346767651}, {beg = 35188556012992, size = 140737346796292, size_with_redzone = 192, name = 0x7fffffffcc80 "\330`\332UUU", module_name = 
0x2000f9618f40 "", has_dynamic_init = 140737346767651, gcc_location = 0x7fffffffc4a8, odr_indicator = 8}}, reg_sites = {4096, 0, 1440375800, 21845}, access_size = 140737488341856, size = 80 'P'}, wild = {addr = 0, access_size = 140737488339952}}}}, addr2_description = {data = {kind = 8, {shadow = {addr = 140737347155860, kind = __asan::kShadowKindGap, shadow_byte = 253 '\375'}, heap = {addr = 140737347155860, alloc_tid = 35188556037377, free_tid = 0, alloc_stack_id = 4153408260, free_stack_id = 32767, chunk_access = {bad_addr = 256, offset = 140737488343664, chunk_begin = 35188558858136, chunk_size = 140737346767651, user_requested_alignment = 1233, access_type = 3, alloc_type = 2}}, stack = {addr = 140737347155860, tid = 35188556037377, offset = 0, frame_pc = 140737346796292, access_size = 256, frame_descr = 0x7fffffffd270 "\001"}, global = {addr = 140737347155860, static kMaxGlobals = 4, globals = {{beg = 35188556037377, size = 0, size_with_redzone = 140737346796292, name = 0x100 , module_name = 0x7fffffffd270 "\001", has_dynamic_init = 35188558858136, gcc_location = 0x7ffff78f8b23 <___interceptor_memset(void*, int, __sanitizer::uptr)+275>, odr_indicator = 140737346778321}, {beg = 93824997642105, size = 93824997657132, size_with_redzone = 93824997755313, name = 0x555555a75baf "\360H\377\0051\275\363", module_name = 0x555555a76ff7 "\211E\374\360H\377\005\026\247\363", has_dynamic_init = 93824997230116, gcc_location = 0x7ffff662a1ca <__libc_start_call_main+122>, odr_indicator = 140737327047307}, {beg = 93824997201429, size = 7453010382234678117, size_with_redzone = 14737695520, name = 0xa72656c , module_name = 0x0, has_dynamic_init = 35188557926392, gcc_location = 0x1bf8ecd2e7d58800, odr_indicator = 35188557926912}, {beg = 140737488341776, size = 140737328989408, size_with_redzone = 140737292336416, name = 0x7ffff68044e0 <_IO_2_1_stderr_> "\207(\255\373", module_name = 0x7fffffffcc50 "@", has_dynamic_init = 93825007009792, gcc_location = 0x7ffff78cd321 <___interceptor_vfprintf(__sanitizer::__sanitizer_FILE *, const char *, typedef __va_list_tag __va_list_tag *)+177>, odr_indicator = 1}}, reg_sites = {4153775751, 32767, 48, 48}, access_size = 140737488342032, size = 48 '0'}, wild = {addr = 140737347155860, access_size = 35188556037377}}}}}, Generic = {<__asan::ErrorBase> = {scariness = {score = 52, descr = "8-byte-write-heap-use-after-free\000\177\000\000h\244\332U\001\000\000\000\220\277\377\377\377\177\000\000\036\245\332UUU\000\000T\004\215\371\000 \000\000\000\000\000\000\000\000\000\000@", '\000' , " ", '\000' , "\240\277\377\377\377\177\000\000\000\000\000\000\000\000\000\000\300\334a\371\000 \000\000\000\220\377\377\377\037\000\000\300\332\377\377\377\037\000\000\200\353\242VUU\000\000\000\300\377\377\377\177\000\000\252\246\332UUU\000\000\000\000\000\000\000\000\000\000p\246\332UUU\000\000\000\354(\364\000\177\000\000\000\000\000\000"...}, tid = 0}, addr_description = {data = {kind = __asan::kAddressKindWild, {shadow = {addr = 35188557931104, kind = (unknown: 0x8), shadow_byte = 0 '\000'}, heap = {addr = 35188557931104, alloc_tid = 8, free_tid = 140737488340192, alloc_stack_id = 0, free_stack_id = 0, chunk_access = {bad_addr = 140737488339952, offset = 140737327463633, chunk_begin = 140737488340224, chunk_size = 140737488340370, user_requested_alignment = 2272, access_type = 0, alloc_type = 3}}, stack = {addr = 35188557931104, tid = 8, offset = 140737488340192, frame_pc = 0, access_size = 140737488339952, frame_descr = 0x7ffff668fcd1 <__vsnprintf_internal+145> 
"H\213U\350dH+\024%("}, global = {addr = 35188557931104, static kMaxGlobals = 4, globals = {{beg = 8, size = 140737488340192, size_with_redzone = 0, name = 0x7fffffffc3f0 "\300", module_name = 0x7ffff668fcd1 <__vsnprintf_internal+145> "H\213U\350dH+\024%(", has_dynamic_init = 140737488340224, gcc_location = 0x7fffffffc592, odr_indicator = 140737488341216}, {beg = 0, size = 140733193388034, size_with_redzone = 140737488341312, name = 0x7fffffffc470 "te-write-heap-use-after-free", module_name = 0x7ffff78ffb04 <__asan_region_is_poisoned(__sanitizer::uptr, __sanitizer::uptr)+516> "\204\300\017\205$\002", has_dynamic_init = 192, gcc_location = 0x7fffffffcc00, odr_indicator = 35188556009280}, {beg = 140737346767651, size = 1721971619, size_with_redzone = 27, name = 0x7fffffffc4c0 "", module_name = 0x7fffffffc930 "\004\373\217\367\377\177", has_dynamic_init = 0, gcc_location = 0x7ffff78ffb04 <__asan_region_is_poisoned(__sanitizer::uptr, __sanitizer::uptr)+516>, odr_indicator = 4096}, {beg = 140737488342096, size = 35188556009472, size_with_redzone = 140737346767651, name = 0x2000f9619dc0 "", module_name = 0x7ffff78ffb04 <__asan_region_is_poisoned(__sanitizer::uptr, __sanitizer::uptr)+516> "\204\300\017\205$\002", has_dynamic_init = 192, gcc_location = 0x7fffffffcc80, odr_indicator = 35188556009280}}, reg_sites = {4153379619, 32767, 4294952104, 32767}, access_size = 8, size = 0 '\000'}, wild = {addr = 35188557931104, access_size = 8}}}}, pc = 93825000956920, bp = 140737488341856, sp = 140737488341840, access_size = 8, bug_descr = 0x7ffff7957794 "heap-use-after-free", is_write = true, shadow_val = 253 '\375'}}}, halt_on_error_ = true} 00:29:57.745 error = {<__asan::ErrorBase> = {scariness = {score = 52, descr = "8-byte-write-heap-use-after-free\000\177\000\000h\244\332U\001\000\000\000\220\277\377\377\377\177\000\000\036\245\332UUU\000\000T\004\215\371\000 \000\000\000\000\000\000\000\000\000\000@", '\000' , " ", '\000' , "\240\277\377\377\377\177\000\000\000\000\000\000\000\000\000\000\300\334a\371\000 \000\000\000\220\377\377\377\037\000\000\300\332\377\377\377\037\000\000\200\353\242VUU\000\000\000\300\377\377\377\177\000\000\252\246\332UUU\000\000\000\000\000\000\000\000\000\000p\246\332UUU\000\000\000\354(\364\000\177\000\000\000\000\000\000"...}, tid = 0}, addr_description = {data = {kind = __asan::kAddressKindWild, {shadow = {addr = 35188557931104, kind = (unknown: 0x8), shadow_byte = 0 '\000'}, heap = {addr = 35188557931104, alloc_tid = 8, free_tid = 140737488340192, alloc_stack_id = 0, free_stack_id = 0, chunk_access = {bad_addr = 140737488339952, offset = 140737327463633, chunk_begin = 140737488340224, chunk_size = 140737488340370, user_requested_alignment = 2272, access_type = 0, alloc_type = 3}}, stack = {addr = 35188557931104, tid = 8, offset = 140737488340192, frame_pc = 0, access_size = 140737488339952, frame_descr = 0x7ffff668fcd1 <__vsnprintf_internal+145> "H\213U\350dH+\024%("}, global = {addr = 35188557931104, static kMaxGlobals = 4, globals = {{beg = 8, size = 140737488340192, size_with_redzone = 0, name = 0x7fffffffc3f0 "\300", module_name = 0x7ffff668fcd1 <__vsnprintf_internal+145> "H\213U\350dH+\024%(", has_dynamic_init = 140737488340224, gcc_location = 0x7fffffffc592, odr_indicator = 140737488341216}, {beg = 0, size = 140733193388034, size_with_redzone = 140737488341312, name = 0x7fffffffc470 "te-write-heap-use-after-free", module_name = 0x7ffff78ffb04 <__asan_region_is_poisoned(__sanitizer::uptr, __sanitizer::uptr)+516> "\204\300\017\205$\002", has_dynamic_init = 192, 
gcc_location = 0x7fffffffcc00, odr_indicator = 35188556009280}, {beg = 140737346767651, size = 1721971619, size_with_redzone = 27, name = 0x7fffffffc4c0 "", module_name = 0x7fffffffc930 "\004\373\217\367\377\177", has_dynamic_init = 0, gcc_location = 0x7ffff78ffb04 <__asan_region_is_poisoned(__sanitizer::uptr, __sanitizer::uptr)+516>, odr_indicator = 4096}, {beg = 140737488342096, size = 35188556009472, size_with_redzone = 140737346767651, name = 0x2000f9619dc0 "", module_name = 0x7ffff78ffb04 <__asan_region_is_poisoned(__sanitizer::uptr, __sanitizer::uptr)+516> "\204\300\017\205$\002", has_dynamic_init = 192, gcc_location = 0x7fffffffcc80, odr_indicator = 35188556009280}}, reg_sites = {4153379619, 32767, 4294952104, 32767}, access_size = 8, size = 0 '\000'}, wild = {addr = 35188557931104, access_size = 8}}}}, pc = 93825000956920, bp = 140737488341856, sp = 140737488341840, access_size = 8, bug_descr = 0x7ffff7957794 "heap-use-after-free", is_write = true, shadow_val = 253 '\375'} 00:29:57.745 #9 0x00007ffff7905bbe in __asan::ReportGenericError (pc=, bp=bp@entry=140737488341856, sp=sp@entry=140737488341840, addr=, is_write=is_write@entry=true, access_size=access_size@entry=8, exp=, fatal=true) at ../../../../src/libsanitizer/asan/asan_report.cpp:497 00:29:57.745 in_report = 00:29:57.745 error = 00:29:57.745 enable_fp = 00:29:57.745 #10 0x00007ffff790730c in __asan::__asan_report_store8 (addr=) at ../../../../src/libsanitizer/asan/asan_rtl.cpp:136 00:29:57.745 bp = 140737488341856 00:29:57.745 pc = 00:29:57.745 local_stack = 3392 00:29:57.745 sp = 140737488341840 00:29:57.745 #11 0x0000555555da63f8 in malloc_elem_free_list_remove (elem=0x2000f97ed240) at ../lib/eal/common/malloc_elem.c:418 00:29:57.745 No locals. 00:29:57.745 #12 0x0000555555da652b in malloc_elem_alloc (elem=0x2000f97ed240, size=64, align=64, bound=0, contig=false) at ../lib/eal/common/malloc_elem.c:437 00:29:57.745 new_elem = 0x2000f97ede80 00:29:57.745 old_elem_size = 3136 00:29:57.745 trailer_size = 0 00:29:57.745 #13 0x0000555555da8886 in heap_alloc (heap=0x1fffffffdac0, type=0x0, size=64, flags=0, align=64, bound=0, contig=false) at ../lib/eal/common/malloc_heap.c:246 00:29:57.745 elem = 0x2000f97ed240 00:29:57.745 user_size = 24 00:29:57.745 #14 0x0000555555daa468 in malloc_heap_alloc_on_heap_id (type=0x0, size=24, heap_id=0, flags=0, align=64, bound=0, contig=false) at ../lib/eal/common/malloc_heap.c:682 00:29:57.745 mcfg = 0x1fffffff9000 00:29:57.745 heap = 0x1fffffffdac0 00:29:57.745 size_flags = 0 00:29:57.745 socket_id = 0 00:29:57.745 ret = 0x555555da60d8 00:29:57.745 internal_conf = 0x555556a2eb80 00:29:57.745 #15 0x0000555555daa6aa in malloc_heap_alloc (type=0x0, size=24, socket_arg=-1, flags=0, align=64, bound=0, contig=false) at ../lib/eal/common/malloc_heap.c:757 00:29:57.745 socket = 0 00:29:57.745 heap_id = 0 00:29:57.745 i = -1194081590 00:29:57.745 ret = 0xfffffffff4154920 00:29:57.745 #16 0x0000555555dae2a3 in malloc_socket (type=0x0, size=24, align=64, socket_arg=-1, trace_ena=true) at ../lib/eal/common/rte_malloc.c:72 00:29:57.745 ptr = 0x7fffffffce50 00:29:57.745 #17 0x0000555555daea73 in rte_malloc_socket (type=0x0, size=24, align=64, socket_arg=-1) at ../lib/eal/common/rte_malloc.c:87 00:29:57.745 No locals. 
00:29:57.745 #18 0x0000555555daebc4 in rte_zmalloc_socket (type=0x0, size=24, align=64, socket=-1) at ../lib/eal/common/rte_malloc.c:111 00:29:57.745 ptr = 0x7fffffffcee8 00:29:57.745 #19 0x0000555555ca28e7 in spdk_zmalloc (size=24, align=64, unused=0x0, socket_id=-1, flags=2) at env.c:42 00:29:57.745 No locals. 00:29:57.745 #20 0x0000555555a8a856 in nvme_ctrlr_queue_async_event (ctrlr=0x2000f98d0400, cpl=0x2000f98ba760) at nvme_ctrlr.c:3138 00:29:57.745 nvme_event = 0x1 00:29:57.745 proc = 0x2000f98b3e80 00:29:57.745 __func__ = "nvme_ctrlr_queue_async_event" 00:29:57.745 #21 0x0000555555a8bec6 in nvme_ctrlr_async_event_cb (arg=0x2000f98d0908, cpl=0x2000f98ba760) at nvme_ctrlr.c:3196 00:29:57.745 aer = 0x2000f98d0908 00:29:57.745 ctrlr = 0x2000f98d0400 00:29:57.745 __func__ = "nvme_ctrlr_async_event_cb" 00:29:57.745 #22 0x0000555555ad275b in nvme_complete_request (cb_fn=0x555555a8b973 , cb_arg=0x2000f98d0908, qpair=0x2000f98d0220, req=0x2000f98cf700, cpl=0x2000f98ba760) at /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_internal.h:1430 00:29:57.745 err_cpl = {cdw0 = 0, cdw1 = 0, sqhd = 0, sqid = 0, cid = 0, {status_raw = 0, status = {p = 0, sc = 0, sct = 0, crd = 0, m = 0, dnr = 0}}} 00:29:57.745 cmd = 0x0 00:29:57.745 __PRETTY_FUNCTION__ = "nvme_complete_request" 00:29:57.745 #23 0x0000555555ae00e3 in nvme_pcie_qpair_complete_tracker (qpair=0x2000f98d0220, tr=0x2000f98ae000, cpl=0x2000f98ba760, print_on_error=true) at nvme_pcie_common.c:706 00:29:57.745 pqpair = 0x2000f98d01c0 00:29:57.745 req = 0x2000f98cf700 00:29:57.745 retry = false 00:29:57.745 error = false 00:29:57.745 print_error = false 00:29:57.745 __PRETTY_FUNCTION__ = "nvme_pcie_qpair_complete_tracker" 00:29:57.745 #24 0x0000555555ae30f9 in nvme_pcie_qpair_process_completions (qpair=0x2000f98d0220, max_completions=64) at nvme_pcie_common.c:925 00:29:57.745 pqpair = 0x2000f98d01c0 00:29:57.745 tr = 0x2000f98ae000 00:29:57.745 cpl = 0x2000f98ba760 00:29:57.745 next_cpl = 0x2000f98ba770 00:29:57.745 num_completions = 1 00:29:57.745 ctrlr = 0x2000f98d0400 00:29:57.745 next_cq_head = 119 00:29:57.745 next_phase = 1 '\001' 00:29:57.745 next_is_valid = true 00:29:57.745 rc = 32767 00:29:57.745 __func__ = "nvme_pcie_qpair_process_completions" 00:29:57.745 __PRETTY_FUNCTION__ = "nvme_pcie_qpair_process_completions" 00:29:57.745 #25 0x0000555555b2b88e in nvme_transport_qpair_process_completions (qpair=0x2000f98d0220, max_completions=0) at nvme_transport.c:615 00:29:57.745 transport = 0x5555569cb800 00:29:57.745 __PRETTY_FUNCTION__ = "nvme_transport_qpair_process_completions" 00:29:57.745 #26 0x0000555555b07742 in spdk_nvme_qpair_process_completions (qpair=0x2000f98d0220, max_completions=0) at nvme_qpair.c:799 00:29:57.745 ret = 0 00:29:57.745 req = 0x1 00:29:57.745 tmp = 0xffffe841640 00:29:57.745 __func__ = "spdk_nvme_qpair_process_completions" 00:29:57.745 #27 0x0000555555a9e871 in spdk_nvme_ctrlr_process_admin_completions (ctrlr=0x2000f98d0400) at nvme_ctrlr.c:4371 00:29:57.745 num_completions = 0 00:29:57.745 rc = 0 00:29:57.745 active_proc = 0xffffe841640 00:29:57.745 #28 0x0000555555a166fd in spdk_aer_temperature_test () at aer.c:464 00:29:57.745 dev = 0x55555699b320 00:29:57.745 #29 0x0000555555a18c66 in main (argc=7, argv=0x7fffffffd5e8) at aer.c:675 00:29:57.745 dev = 0x55555699b440 00:29:57.745 opts = {name = 0x5555562996a0 "aer", core_mask = 0x5555562996e0 "0x1", lcore_map = 0x0, shm_id = 0, mem_channel = -1, main_core = -1, mem_size = -1, no_pci = false, hugepage_single_segments = false, unlink_hugepage = false, no_huge = false, 
num_pci_addr = 0, hugedir = 0x0, pci_blocked = 0x0, pci_allowed = 0x0, iova_mode = 0x0, base_virtaddr = 35184372088832, env_context = 0x0, vf_token = 0x0} 00:29:57.745 rc = 0 00:29:57.745 detach_ctx = 0x0 00:29:57.745 00:29:57.745 -- 00:29:59.650 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:29:59.650 Waiting for block devices as requested 00:29:59.650 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:30:06.216 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:30:06.216 Cleaning 00:30:06.216 Removing: /var/run/dpdk/spdk0/config 00:30:06.216 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:06.216 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0_93300 00:30:06.216 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0_93301 00:30:06.216 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:06.216 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1_93300 00:30:06.216 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1_93301 00:30:06.216 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:06.216 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2_93300 00:30:06.216 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2_93301 00:30:06.216 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:06.216 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3_93300 00:30:06.216 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3_93301 00:30:06.216 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:06.216 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:06.216 Removing: /var/run/dpdk/spdk0/mp_socket 00:30:06.216 Removing: /var/run/dpdk/spdk0/mp_socket_93300_367bb0518f2 00:30:06.216 Removing: /var/run/dpdk/spdk0/mp_socket_93301_367d853d624 00:30:06.216 Removing: /dev/shm/spdk_tgt_trace.pid60565 00:30:06.216 Removing: /var/tmp/spdk_pci_lock_0000:00:06.0 00:30:06.216 Removing: /var/tmp/spdk_cpu_lock_001 00:30:06.216 Removing: /var/tmp/spdk_cpu_lock_002 00:30:06.216 Removing: /var/tmp/spdk_cpu_lock_003 00:30:06.216 Removing: /var/run/dpdk/spdk0 00:30:06.216 Removing: /var/run/dpdk/spdk_pid60361 00:30:06.216 Removing: /var/run/dpdk/spdk_pid60565 00:30:06.216 Removing: /var/run/dpdk/spdk_pid60823 00:30:06.216 Removing: /var/run/dpdk/spdk_pid61069 00:30:06.216 Removing: /var/run/dpdk/spdk_pid61243 00:30:06.216 Removing: /var/run/dpdk/spdk_pid61347 00:30:06.216 Removing: /var/run/dpdk/spdk_pid61447 00:30:06.216 Removing: /var/run/dpdk/spdk_pid61562 00:30:06.216 Removing: /var/run/dpdk/spdk_pid61663 00:30:06.216 Removing: /var/run/dpdk/spdk_pid61697 00:30:06.216 Removing: /var/run/dpdk/spdk_pid61739 00:30:06.216 Removing: /var/run/dpdk/spdk_pid61806 00:30:06.216 Removing: /var/run/dpdk/spdk_pid61907 00:30:06.216 Removing: /var/run/dpdk/spdk_pid62394 00:30:06.216 Removing: /var/run/dpdk/spdk_pid62476 00:30:06.216 Removing: /var/run/dpdk/spdk_pid62552 00:30:06.216 Removing: /var/run/dpdk/spdk_pid62576 00:30:06.216 Removing: /var/run/dpdk/spdk_pid62715 00:30:06.216 Removing: /var/run/dpdk/spdk_pid62733 00:30:06.216 Removing: /var/run/dpdk/spdk_pid62872 00:30:06.216 Removing: /var/run/dpdk/spdk_pid62888 00:30:06.216 Removing: /var/run/dpdk/spdk_pid62952 00:30:06.216 Removing: /var/run/dpdk/spdk_pid62978 00:30:06.216 Removing: /var/run/dpdk/spdk_pid63042 00:30:06.216 Removing: /var/run/dpdk/spdk_pid63068 00:30:06.216 Removing: /var/run/dpdk/spdk_pid63245 00:30:06.216 Removing: /var/run/dpdk/spdk_pid63287 00:30:06.216 
Removing: /var/run/dpdk/spdk_pid63329 00:30:06.216 Removing: /var/run/dpdk/spdk_pid63399 00:30:06.216 Removing: /var/run/dpdk/spdk_pid63481 00:30:06.216 Removing: /var/run/dpdk/spdk_pid63513 00:30:06.216 Removing: /var/run/dpdk/spdk_pid63586 00:30:06.216 Removing: /var/run/dpdk/spdk_pid63612 00:30:06.216 Removing: /var/run/dpdk/spdk_pid63653 00:30:06.216 Removing: /var/run/dpdk/spdk_pid63683 00:30:06.216 Removing: /var/run/dpdk/spdk_pid63731 00:30:06.216 Removing: /var/run/dpdk/spdk_pid63757 00:30:06.216 Removing: /var/run/dpdk/spdk_pid63798 00:30:06.216 Removing: /var/run/dpdk/spdk_pid63830 00:30:06.216 Removing: /var/run/dpdk/spdk_pid63871 00:30:06.216 Removing: /var/run/dpdk/spdk_pid63897 00:30:06.216 Removing: /var/run/dpdk/spdk_pid63938 00:30:06.216 Removing: /var/run/dpdk/spdk_pid63975 00:30:06.216 Removing: /var/run/dpdk/spdk_pid64016 00:30:06.216 Removing: /var/run/dpdk/spdk_pid64042 00:30:06.216 Removing: /var/run/dpdk/spdk_pid64089 00:30:06.216 Removing: /var/run/dpdk/spdk_pid64115 00:30:06.216 Removing: /var/run/dpdk/spdk_pid64156 00:30:06.216 Removing: /var/run/dpdk/spdk_pid64182 00:30:06.216 Removing: /var/run/dpdk/spdk_pid64234 00:30:06.216 Removing: /var/run/dpdk/spdk_pid64260 00:30:06.216 Removing: /var/run/dpdk/spdk_pid64307 00:30:06.216 Removing: /var/run/dpdk/spdk_pid64333 00:30:06.216 Removing: /var/run/dpdk/spdk_pid64375 00:30:06.216 Removing: /var/run/dpdk/spdk_pid64407 00:30:06.216 Removing: /var/run/dpdk/spdk_pid64453 00:30:06.216 Removing: /var/run/dpdk/spdk_pid64479 00:30:06.216 Removing: /var/run/dpdk/spdk_pid64526 00:30:06.216 Removing: /var/run/dpdk/spdk_pid64552 00:30:06.216 Removing: /var/run/dpdk/spdk_pid64593 00:30:06.216 Removing: /var/run/dpdk/spdk_pid64629 00:30:06.216 Removing: /var/run/dpdk/spdk_pid64671 00:30:06.216 Removing: /var/run/dpdk/spdk_pid64703 00:30:06.216 Removing: /var/run/dpdk/spdk_pid64748 00:30:06.216 Removing: /var/run/dpdk/spdk_pid64786 00:30:06.216 Removing: /var/run/dpdk/spdk_pid64830 00:30:06.216 Removing: /var/run/dpdk/spdk_pid64869 00:30:06.216 Removing: /var/run/dpdk/spdk_pid64920 00:30:06.216 Removing: /var/run/dpdk/spdk_pid64946 00:30:06.216 Removing: /var/run/dpdk/spdk_pid64987 00:30:06.216 Removing: /var/run/dpdk/spdk_pid65019 00:30:06.216 Removing: /var/run/dpdk/spdk_pid65066 00:30:06.216 Removing: /var/run/dpdk/spdk_pid65146 00:30:06.216 Removing: /var/run/dpdk/spdk_pid65257 00:30:06.216 Removing: /var/run/dpdk/spdk_pid65422 00:30:06.216 Removing: /var/run/dpdk/spdk_pid65507 00:30:06.216 Removing: /var/run/dpdk/spdk_pid65560 00:30:06.216 Removing: /var/run/dpdk/spdk_pid66757 00:30:06.216 Removing: /var/run/dpdk/spdk_pid66957 00:30:06.216 Removing: /var/run/dpdk/spdk_pid67148 00:30:06.216 Removing: /var/run/dpdk/spdk_pid67263 00:30:06.216 Removing: /var/run/dpdk/spdk_pid67379 00:30:06.216 Removing: /var/run/dpdk/spdk_pid67449 00:30:06.216 Removing: /var/run/dpdk/spdk_pid67480 00:30:06.216 Removing: /var/run/dpdk/spdk_pid67511 00:30:06.216 Removing: /var/run/dpdk/spdk_pid67929 00:30:06.216 Removing: /var/run/dpdk/spdk_pid68006 00:30:06.216 Removing: /var/run/dpdk/spdk_pid68116 00:30:06.216 Removing: /var/run/dpdk/spdk_pid68174 00:30:06.216 Removing: /var/run/dpdk/spdk_pid69261 00:30:06.216 Removing: /var/run/dpdk/spdk_pid70071 00:30:06.216 Removing: /var/run/dpdk/spdk_pid70876 00:30:06.216 Removing: /var/run/dpdk/spdk_pid71894 00:30:06.216 Removing: /var/run/dpdk/spdk_pid72984 00:30:06.216 Removing: /var/run/dpdk/spdk_pid73960 00:30:06.216 Removing: /var/run/dpdk/spdk_pid75318 00:30:06.216 Removing: 
/var/run/dpdk/spdk_pid76422 00:30:06.216 Removing: /var/run/dpdk/spdk_pid77515 00:30:06.216 Removing: /var/run/dpdk/spdk_pid78126 00:30:06.216 Removing: /var/run/dpdk/spdk_pid78620 00:30:06.216 Removing: /var/run/dpdk/spdk_pid79194 00:30:06.216 Removing: /var/run/dpdk/spdk_pid79627 00:30:06.216 Removing: /var/run/dpdk/spdk_pid80135 00:30:06.216 Removing: /var/run/dpdk/spdk_pid80631 00:30:06.216 Removing: /var/run/dpdk/spdk_pid81217 00:30:06.216 Removing: /var/run/dpdk/spdk_pid81698 00:30:06.216 Removing: /var/run/dpdk/spdk_pid82935 00:30:06.216 Removing: /var/run/dpdk/spdk_pid83473 00:30:06.216 Removing: /var/run/dpdk/spdk_pid83954 00:30:06.216 Removing: /var/run/dpdk/spdk_pid85294 00:30:06.216 Removing: /var/run/dpdk/spdk_pid85887 00:30:06.216 Removing: /var/run/dpdk/spdk_pid86458 00:30:06.216 Removing: /var/run/dpdk/spdk_pid87141 00:30:06.216 Removing: /var/run/dpdk/spdk_pid87191 00:30:06.216 Removing: /var/run/dpdk/spdk_pid87240 00:30:06.216 Removing: /var/run/dpdk/spdk_pid87288 00:30:06.216 Removing: /var/run/dpdk/spdk_pid87406 00:30:06.216 Removing: /var/run/dpdk/spdk_pid87549 00:30:06.216 Removing: /var/run/dpdk/spdk_pid87759 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88015 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88028 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88071 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88097 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88120 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88146 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88175 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88196 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88221 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88251 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88271 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88300 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88326 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88346 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88375 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88401 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88421 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88451 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88470 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88496 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88543 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88562 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88602 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88670 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88709 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88725 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88766 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88786 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88807 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88854 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88878 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88912 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88936 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88950 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88970 00:30:06.216 Removing: /var/run/dpdk/spdk_pid88990 00:30:06.216 Removing: /var/run/dpdk/spdk_pid89009 00:30:06.216 Removing: /var/run/dpdk/spdk_pid89023 00:30:06.216 Removing: /var/run/dpdk/spdk_pid89043 00:30:06.216 Removing: /var/run/dpdk/spdk_pid89082 00:30:06.216 Removing: /var/run/dpdk/spdk_pid89120 00:30:06.216 Removing: /var/run/dpdk/spdk_pid89142 00:30:06.216 Removing: /var/run/dpdk/spdk_pid89177 00:30:06.216 Removing: /var/run/dpdk/spdk_pid89198 00:30:06.216 Removing: /var/run/dpdk/spdk_pid89218 00:30:06.476 Removing: /var/run/dpdk/spdk_pid89272 00:30:06.476 Removing: /var/run/dpdk/spdk_pid89290 
00:30:06.476 Removing: /var/run/dpdk/spdk_pid89329 00:30:06.476 Removing: /var/run/dpdk/spdk_pid89343 00:30:06.476 Removing: /var/run/dpdk/spdk_pid89361 00:30:06.476 Removing: /var/run/dpdk/spdk_pid89382 00:30:06.476 Removing: /var/run/dpdk/spdk_pid89396 00:30:06.476 Removing: /var/run/dpdk/spdk_pid89416 00:30:06.476 Removing: /var/run/dpdk/spdk_pid89434 00:30:06.476 Removing: /var/run/dpdk/spdk_pid89455 00:30:06.476 Removing: /var/run/dpdk/spdk_pid89535 00:30:06.476 Removing: /var/run/dpdk/spdk_pid89610 00:30:06.476 Removing: /var/run/dpdk/spdk_pid89742 00:30:06.476 Removing: /var/run/dpdk/spdk_pid89758 00:30:06.476 Removing: /var/run/dpdk/spdk_pid89806 00:30:06.476 Removing: /var/run/dpdk/spdk_pid89858 00:30:06.476 Removing: /var/run/dpdk/spdk_pid89885 00:30:06.476 Removing: /var/run/dpdk/spdk_pid89910 00:30:06.476 Removing: /var/run/dpdk/spdk_pid89938 00:30:06.476 Removing: /var/run/dpdk/spdk_pid89982 00:30:06.476 Removing: /var/run/dpdk/spdk_pid90007 00:30:06.476 Removing: /var/run/dpdk/spdk_pid90082 00:30:06.476 Removing: /var/run/dpdk/spdk_pid90134 00:30:06.476 Removing: /var/run/dpdk/spdk_pid90177 00:30:06.476 Removing: /var/run/dpdk/spdk_pid90411 00:30:06.476 Removing: /var/run/dpdk/spdk_pid90518 00:30:06.476 Removing: /var/run/dpdk/spdk_pid90558 00:30:06.476 Removing: /var/run/dpdk/spdk_pid90643 00:30:06.476 Removing: /var/run/dpdk/spdk_pid90709 00:30:06.476 Removing: /var/run/dpdk/spdk_pid90745 00:30:06.476 Removing: /var/run/dpdk/spdk_pid90975 00:30:06.476 Removing: /var/run/dpdk/spdk_pid91136 00:30:06.476 Removing: /var/run/dpdk/spdk_pid91228 00:30:06.476 Removing: /var/run/dpdk/spdk_pid91277 00:30:06.476 Removing: /var/run/dpdk/spdk_pid91298 00:30:06.476 Removing: /var/run/dpdk/spdk_pid91373 00:30:06.476 Removing: /var/run/dpdk/spdk_pid91763 00:30:06.476 Removing: /var/run/dpdk/spdk_pid91794 00:30:06.476 Removing: /var/run/dpdk/spdk_pid92086 00:30:06.476 Removing: /var/run/dpdk/spdk_pid92195 00:30:06.476 Removing: /var/run/dpdk/spdk_pid92290 00:30:06.476 Removing: /var/run/dpdk/spdk_pid92332 00:30:06.476 Removing: /var/run/dpdk/spdk_pid92358 00:30:06.476 Removing: /var/run/dpdk/spdk_pid92389 00:30:06.476 Clean 00:30:06.735 killing process with pid 51392 00:30:06.735 killing process with pid 51393 00:30:06.735 05:27:25 -- common/autotest_common.sh@1436 -- # return 1 00:30:06.735 05:27:25 -- common/autotest_common.sh@1 -- # : 00:30:06.735 05:27:25 -- common/autotest_common.sh@1 -- # exit 1 01:31:48.489 Cancelling nested steps due to timeout 01:31:48.492 Sending interrupt signal to process 01:32:00.588 script returned exit code 255 01:32:00.593 [Pipeline] } 01:32:00.617 [Pipeline] // timeout 01:32:00.625 [Pipeline] } 01:32:00.647 [Pipeline] // stage 01:32:00.653 [Pipeline] } 01:32:00.658 Timeout has been exceeded 01:32:00.658 org.jenkinsci.plugins.workflow.actions.ErrorAction$ErrorId: a5d5ae57-8095-4ffd-8f0e-f84518d1142d 01:32:00.658 Setting overall build result to ABORTED 01:32:00.677 [Pipeline] // catchError 01:32:00.687 [Pipeline] stage 01:32:00.689 [Pipeline] { (Stop VM) 01:32:00.704 [Pipeline] sh 01:32:00.985 + vagrant halt 01:32:04.271 ==> default: Halting domain... 01:32:09.545 [Pipeline] sh 01:32:09.822 + vagrant destroy -f 01:32:13.108 ==> default: Removing domain... 
01:32:13.121 [Pipeline] sh 01:32:13.404 + mv output /var/jenkins/workspace/ubuntu24-vg-autotest/output 01:32:13.413 [Pipeline] } 01:32:13.431 [Pipeline] // stage 01:32:13.437 [Pipeline] } 01:32:13.455 [Pipeline] // dir 01:32:13.461 [Pipeline] } 01:32:13.479 [Pipeline] // wrap 01:32:13.485 [Pipeline] } 01:32:13.500 [Pipeline] // catchError 01:32:13.509 [Pipeline] stage 01:32:13.511 [Pipeline] { (Epilogue) 01:32:13.526 [Pipeline] sh 01:32:13.809 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 01:32:23.800 [Pipeline] catchError 01:32:23.803 [Pipeline] { 01:32:23.819 [Pipeline] sh 01:32:24.101 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 01:32:24.102 Artifacts sizes are good 01:32:24.111 [Pipeline] } 01:32:24.129 [Pipeline] // catchError 01:32:24.141 [Pipeline] archiveArtifacts 01:32:24.148 Archiving artifacts 01:32:26.493 [Pipeline] cleanWs 01:32:26.505 [WS-CLEANUP] Deleting project workspace... 01:32:26.505 [WS-CLEANUP] Deferred wipeout is used... 01:32:26.511 [WS-CLEANUP] done 01:32:26.514 [Pipeline] } 01:32:26.532 [Pipeline] // stage 01:32:26.538 [Pipeline] } 01:32:26.554 [Pipeline] // node 01:32:26.560 [Pipeline] End of Pipeline 01:32:26.602 Finished: ABORTED